Enhanced Arctic-Tethys connectivity ended the Toarcian Oceanic Anoxic Event in NW Europe
B. van de Schootbrugge, A. J. P. Houben, F. E. Z. Ercan, R. Verreussel, S. Kerstholt, N. M. M. Janssen, B. Nikitenko, G. Suan
Journal: Geological Magazine , First View
Published online by Cambridge University Press: 13 December 2019, pp. 1-19
The Toarcian Oceanic Anoxic Event (T-OAE, c. 182 Ma) represents a major perturbation of the carbon cycle marked by widespread black shale deposition. Consequently, the onset of the T-OAE has been linked to the combined effects of global warming, high productivity, basin restriction and salinity stratification. However, the processes that led to termination of the event remain elusive. Here, we present palynological data from Arctic Siberia (Russia), the Viking Corridor (offshore Norway) and the Yorkshire Coast (UK), all spanning the upper Pliensbachian – upper Toarcian stages. Rather than a 'dinoflagellate cyst black-out', as recorded in T-OAE strata of NW Europe, both the Arctic and Viking Corridor records show high abundance and dinoflagellate diversity throughout the T-OAE interval as calibrated by C-isotope records. Significantly, in the Arctic Sea and Viking Corridor, numerous species of the Parvocysta and Phallocysta suites make their first appearance in the lower Toarcian Falciferum Zone much earlier than in Europe, where these key dinoflagellate species appeared suddenly during the Bifrons Zone. Our results indicate migrations of Arctic dinoflagellate species, driven by relative sea-level rise in the Viking Corridor and the establishment of a S-directed circulation from the Arctic Sea into the Tethys Ocean. The results support oceanographic models, but are at odds with some interpretations based on geochemical proxies. The migration of Arctic dinoflagellate species coincides with the end of the T-OAE and marks the arrival of oxygenated, low-salinity Arctic waters, triggering a regime change from persistent euxinia to more dynamic oxygen conditions.
The properties of bright globular clusters, ultra-compact dwarfs and dwarf nuclei in the Virgo core: hints on origin of ultra-compact dwarf galaxies (UCDs)
Chengze Liu, Eric W. Peng, Patrick Côté, Hong-Xin Zhang, Laura Ferrarese, Andrés Jordán, J. Christopher Mihos, Roberto P. Muñoz, Thomas H. Puzia, Ariane Lançon, Stephen Gwyn, Jean-Charles Cuillandre, John P. Blakeslee, Alessandro Boselli, Patrick R. Durrell, Pierre-Alain Duc, Puragra Guhathakurta, Lauren A. MacArthur, Simona Mei, Rubén Sánchez-Janssen, Haiguang Xu
Journal: Proceedings of the International Astronomical Union / Volume 14 / Issue S344 / August 2018
Based on data from the Next Generation Virgo cluster Survey (NGVS), we statistically study the photometric properties of globular clusters (GCs), ultra-compact dwarfs (UCDs) and dwarf nuclei in the Virgo core (M87) region. We find a clear negative color (g - z) gradient in the GC system associated with M87, i.e. GCs in the outer regions are bluer. No such color gradient exists in the UCD system or in the dwarf nuclei around M87. In addition, we find that many UCDs are surrounded by extended, low surface brightness envelopes. The dwarf nuclei and UCDs show spatial distributions different from that of the GCs, with dwarf nuclei and UCDs (especially the UCDs with visible envelopes) lying at larger distances from the Virgo center. These results support the view that UCDs (at least a fraction of them) are more closely tied to dwarf nuclei than to GCs.
Pedagogical Value of Polling-Place Observation by Students
Christopher B. Mann, Gayle A. Alberda, Nathaniel A. Birkhead, Yu Ouyang, Chloe Singer, Charles Stewart, Michael C. Herron, Emily Beaulieu, Frederick Boehmke, Joshua Boston, Francisco Cantu, Rachael Cobb, David Darmofal, Thomas C. Ellington, Charles J. Finocchiaro, Michael Gilbert, Victor Haynes, Brian Janssen, David Kimball, Charles Kromkowski, Elena Llaudet, Matthew R. Miles, David Miller, Lindsay Nielson, Costas Panagopoulos, Andrew Reeves, Min Hee Seo, Haley Simmons, Corwin Smidt, Robert Stein, Rachel VanSickle-Ward, Abby K. Wood, Julie Wronski
Journal: PS: Political Science & Politics / Volume 51 / Issue 4 / October 2018
Good education requires student experiences that deliver lessons about practice as well as theory and that encourage students to work for the public good—especially in the operation of democratic institutions (Dewey 1923; Dewey 1938). We report on an evaluation of the pedagogical value of a research project involving 23 colleges and universities across the country. Faculty trained and supervised students who observed polling places in the 2016 General Election. Our findings indicate that this was a valuable learning experience in both the short and long terms. Students found their experiences to be valuable and reported learning both generally and specifically related to course material. Postelection, they also felt more knowledgeable about election science topics, voting behavior, and research methods. Students reported interest in participating in similar research in the future, would recommend that other students do so, and expressed interest in more learning and research about the topics central to their experience. Our results suggest that participants appreciated the importance of elections and their study. Collectively, the participating students are engaged and efficacious—essential qualities of citizens in a democracy.
Maternal prenatal depression is associated with decreased placental expression of the imprinted gene PEG3
A. B. Janssen, L. E. Capron, K. O'Donnell, S. J. Tunster, P. G. Ramchandani, A. E. P. Heazell, V. Glover, R. M. John
Journal: Psychological Medicine / Volume 46 / Issue 14 / October 2016
Published online by Cambridge University Press: 15 August 2016, pp. 2999-3011
Maternal prenatal stress during pregnancy is associated with fetal growth restriction and adverse neurodevelopmental outcomes, which may be mediated by impaired placental function. Imprinted genes control fetal growth, placental development, adult behaviour (including maternal behaviour) and placental lactogen production. This study examined whether maternal prenatal depression was associated with aberrant placental expression of the imprinted genes paternally expressed gene 3 (PEG3), paternally expressed gene 10 (PEG10), pleckstrin homology-like domain family a member 2 (PHLDA2) and cyclin-dependent kinase inhibitor 1C (CDKN1C), and with resulting impairment of human placental lactogen (hPL) expression.
A diagnosis of depression during pregnancy was recorded from Manchester cohort participants' medical notes (n = 75). Queen Charlotte's (n = 40) and My Baby and Me study (MBAM) (n = 81) cohort participants completed the Edinburgh Postnatal Depression Scale self-rating psychometric questionnaire. Villous trophoblast tissue samples were analysed for gene expression.
In a pilot study, diagnosed depression during pregnancy was associated with a significant reduction in placental PEG3 expression (41%, p = 0.02). In two further independent cohorts, the Queen Charlotte's and MBAM cohorts, placental PEG3 expression was also inversely associated with maternal depression scores, an association that was significant in male but not female placentas. Finally, hPL expression was significantly decreased in women with clinically diagnosed depression (44%, p < 0.05) and in those with high depression scores (31% and 21% in the two cohorts, respectively).
This study provides the first evidence that maternal prenatal depression is associated with changes in the placental expression of PEG3, coincident with decreased expression of hPL. This aberrant placental gene expression could provide a possible mechanistic explanation for the co-occurrence of maternal depression, fetal growth restriction, impaired maternal behaviour and poorer offspring outcomes.
Origin of ultra-compact dwarfs: a dynamical perspective
Hong-Xin Zhang, Eric W. Peng, Patrick Côté, Chengze Liu, Laura Ferrarese, Jean-Charles Cuillandre, Nelson Caldwell, Stephen D. J. Gwyn, Andrés Jordán, Ariane Lançon, Biao Li, Roberto P. Muñoz, Thomas H. Puzia, Kenji Bekki, John Blakeslee, Alessandro Boselli, Michael J. Drinkwater, Pierre-Alain Duc, Patrick Durrell, Eric Emsellem, Peter Firth, Ruben Sánchez-Janssen
Published online by Cambridge University Press: 07 March 2016, pp. 264-268
The discovery of ultra-compact dwarfs (UCDs) over the past 15 years has blurred the once seemingly clear division between classical globular clusters (GCs) and early-type galaxies. The intermediate nature of UCDs, which are larger and more massive than typical GCs but more compact than typical dwarf galaxies, has triggered vigorous debate on whether UCDs are galactic in origin or merely the most extreme GCs. Previous studies of various scaling relations, stellar populations and internal dynamics did not give an unambiguous answer as to the primary origin of UCDs. In this contribution, we present the first detailed study of the global dynamics of 97 UCDs (rh ≳ 10 pc) associated with the central cD galaxy of the Virgo cluster, M87. We find that UCDs follow a different radial number density profile and have different rotational properties from GCs. The orbital anisotropies of UCDs are tangentially biased within ~ 40 kpc of M87 and become radially biased farther out. In contrast, the blue GCs, which have median colors similar to our sample of UCDs, become more tangentially biased at radii beyond ~ 40 kpc. Our analysis suggests that most UCDs in M87 are not consistent with being merely the most luminous and extended examples of otherwise normal GCs. The radially biased orbital structure of UCDs at large radii is in general agreement with the scenario that most UCDs originate from tidally threshed dwarf galaxies.
By Ainsworth Shaaron, Ayres Paul, Azevedo Roger, Bediou Benoit, Britt Anne, Kirsten R. Butcher, Chen Fei, Michelene T. H. Chi, Richard E. Clark, Ruth Colvin Clark, Sharon J. Derry, David F. Feldon, Fiorella Logan, J. D. Fletcher, Arthur C. Graesser, Hegarty Mary, Hu Xiangen, Allison J. Jaeger, Janssen Jeroen, Cheryl I. Johnson, Ton De Jong, Kalyuga Slava, Kester Liesbeth, Kirschner Femke, Paul A. Kirschner, Susanne P. Lajoie, Ard W. Lazonder, Leutner Detlev, Low Renae, Richard K. Lowe, Richard E. Mayer, Benjamin D. Nye, Paas Fred, Pilegard Celeste, Jan L. Plass, Heather A. Priest, Renkl Alexander, Rouet Jean-François, Christopher A. Sanchez, Scheiter Katharina, Schmeck Annett, Schnotz Wolfgang, Ruth N. Schwartz, Bruce L. Sherin, Miriam Gamoran Sherin, Sweller John, Tobias Sigmund, Tamara Van Gog, Jeroen J. G. Van Merriënboer, Jennifer Wiley, Alexander P. Wind, Ruth Wylie
Edited by Richard E. Mayer, University of California, Santa Barbara
Book: The Cambridge Handbook of Multimedia Learning
Print publication: 28 July 2014, pp ix-x
Changes in back fat thickness during late gestation predict colostrum yield in sows
R. Decaluwé, D. Maes, I. Declerck, A. Cools, B. Wuyts, S. De Smet, G. P. J. Janssens
Journal: animal / Volume 7 / Issue 12 / December 2013
Published online by Cambridge University Press: 18 November 2013, pp. 1999-2007
Print publication: December 2013
Directing protein and energy sources towards lactation is crucial to optimise milk production in sows, but how this influences colostrum yield (CY) remains unknown. The aim of this study was to identify associations between CY and the sow's use of nutrient resources. We included 37 sows that were all housed, fed and managed similarly. Parity, back fat change (ΔBF), CY and performance parameters were measured. We obtained sow serum samples 3 to 4 days before farrowing and at D1 of lactation following overnight fasting. These were analysed for non-esterified fatty acids (NEFA), urea, creatinine, (iso)butyrylcarnitine (C4) and immunoglobulins G (IgG) and A (IgA). Colostrum samples collected 3, 6 and 24 h after the birth of the first piglet were analysed for their nutrient and immunoglobulin content. The technical parameters associated with CY were parity group (a; parities 1 to 3 = 0 v. parities 4 to 7 = 1) and ΔBF between D85 and D109 of gestation (mm) (b): CY (g) = 4290 − 842a − 113b (R2 = 0.41, P < 0.001). Gestation length (P < 0.001) and the ΔBF between D109 of gestation and D1 of lactation (P = 0.050) were identified as possible underlying factors of the parity group effect. The metabolic parameters associated with CY were C4 at 3 to 4 days before farrowing (a), and log10C4 (b) and log10NEFA (c) at D1 of lactation: CY (g) = 3582 − 1604a + 1007b − 922c (R2 = 0.39, P = 0.001). Colostrum composition was independent of CY. The negative association between CY and ΔBF D85–D109 of gestation could not be further explained on the basis of our data. Sows that were catabolic 1 week prior to farrowing seemed unable to produce colostrum to their full potential. This was especially the case for sows of parities 4 to 7, although they had a similar feed intake, litter birth weight and colostrum composition compared with parity 1 to 3 sows.
In conclusion, this study showed that parity and the use of body fat and protein reserves during late gestation were associated with CY, indicating that proper management of the sow's body condition during late gestation could optimise the intrinsic capacity of the sow's CY.
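The two regression models quoted in the abstract can be written out directly. The sketch below (Python, my own illustration) uses the coefficients exactly as printed; the input values in the example are made up for illustration and are not data from the study.

```python
import math

def colostrum_yield_technical(parity_group, delta_bf_mm):
    """CY (g) = 4290 - 842*a - 113*b, where a is the parity group
    (0 for parities 1-3, 1 for parities 4-7) and b is the back-fat
    change (mm) between D85 and D109 of gestation."""
    return 4290 - 842 * parity_group - 113 * delta_bf_mm

def colostrum_yield_metabolic(c4_prefarrow, c4_d1, nefa_d1):
    """CY (g) = 3582 - 1604*a + 1007*b - 922*c, where a is C4 at 3-4 days
    before farrowing and b, c are log10 of C4 and NEFA at D1 of lactation."""
    return (3582 - 1604 * c4_prefarrow
            + 1007 * math.log10(c4_d1) - 922 * math.log10(nefa_d1))

# Illustrative prediction for a parity 1-3 sow (a = 0) losing 2 mm of back fat:
print(colostrum_yield_technical(0, 2.0))  # 4064.0
```

Note how the signs encode the study's conclusion: higher parity group and greater back-fat loss in late gestation both lower the predicted colostrum yield.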
The cat as a model for human obesity: insights into depot-specific inflammation associated with feline obesity
H. Van de Velde, G. P. J. Janssens, H. de Rooster, I. Polis, I. Peters, R. Ducatelle, P. Nguyen, J. Buyse, K. Rochus, J. Xu, A. Verbrugghe, M. Hesta
Journal: British Journal of Nutrition / Volume 110 / Issue 7 / 14 October 2013
Published online by Cambridge University Press: 23 May 2013, pp. 1326-1335
Print publication: 14 October 2013
According to human research, the location of fat accumulation seems to play an important role in the induction of obesity-related inflammatory complications. To evaluate whether an inflammatory response to obesity depends on adipose tissue location, adipokine gene expression, presence of immune cells and adipocyte cell size of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) were compared between lean and obese cats. Additionally, the present study proposes the cat as a model for human obesity and highlights the importance of animal models for human research. A total of ten chronically obese and ten lean control cats were included in the present study. Body weight, body condition score and body composition were determined. T-lymphocyte, B-lymphocyte, macrophage concentrations and adipocyte cell size were measured in adipose tissue at different locations. Serum leptin concentration and the mRNA expression of leptin and adiponectin, monocyte chemoattractant protein-1, chemoligand-5, IL-8, TNF-α, interferon-γ, IL-6 and IL-10 were measured in blood and adipose tissues (abdominal and inguinal SAT, and omental, bladder and renal VAT). Feline obesity was characterised by increased adipocyte cell size and altered adipokine gene expression, in favour of pro-inflammatory cytokines and chemokines. Consequently, concentration of T-lymphocytes was increased in the adipose tissue of obese cats. Alteration of adipose tissue was location dependent in both lean and obese cats. Moreover, the observed changes were more prominent in SAT compared with VAT.
7 - A multi-criteria approach to equitable fishing rights allocation in South Africa's Western Cape
By Ron Janssen, Alison R. Joubert, Theodor J. Stewart
Edited by Pieter J. H. van Beukering, Vrije Universiteit, Amsterdam, Elissaios Papyrakis, Vrije Universiteit, Amsterdam, Jetske Bouma, Vrije Universiteit, Amsterdam, Roy Brouwer, Vrije Universiteit, Amsterdam
Book: Nature's Wealth
Published online: 05 July 2013
Print publication: 28 March 2013, pp 155-172
Fisheries resources are vulnerable to overexploitation, in large part because of their open-access nature. For long-term ecological and socio-economic sustainability, fisheries therefore need to be regulated by limiting the Total Allowable Catch (TAC) and/or Total Allowable Effort (TAE). It can be argued that tradable fishing rights are the way to maximize the efficiency of the fisheries sector; this is the solution implemented successfully in countries such as Iceland and New Zealand (Arnason 2005, Scott 2000). In many developing countries, however, protection of traditional fishing communities and their subsistence fisheries is an additional concern. Objectives of fishing rights allocation can then include poverty reduction and preservation of traditional culture.
This study deals with the fishing rights allocation in South Africa. South Africa's fisheries yields peaked in the 1960s and 1970s, but since then many stocks have declined due to overexploitation. Although the fishing industry has historically been dominated by a few white-owned companies, since the end of apartheid new policies have been introduced to (1) rectify this inequitable distribution of fishing opportunities and (2) improve the sustainability of fisheries (Cockcroft et al. 2002, RSA 1998).
Preventing progression to first-episode psychosis in early initial prodromal states
A. Bechdolf, M. Wagner, S. Ruhrmann, S. Harrigan, V. Putzfeld, R. Pukrop, A. Brockhaus-Dumke, J. Berning, B. Janssen, P. Decker, R. Bottlender, K. Maurer, H.-J. Möller, W. Gaebel, H. Häfner, W. Maier, J. Klosterkötter
Journal: The British Journal of Psychiatry / Volume 200 / Issue 1 / January 2012
Young people with self-experienced cognitive thought and perception deficits (basic symptoms) may present with an early initial prodromal state (EIPS) of psychosis in which most of the disability and neurobiological deficits of schizophrenia have not yet occurred.
To investigate the effects of an integrated psychological intervention (IPI), combining individual cognitive–behavioural therapy, group skills training, cognitive remediation and multifamily psychoeducation, on the prevention of psychosis in the EIPS.
A randomised controlled, multicentre, parallel group trial of 12 months of IPI v. supportive counselling (trial registration number: NCT00204087). Primary outcome was progression to psychosis at 12- and 24-month follow-up.
A total of 128 help-seeking out-patients in an EIPS were randomised. Integrated psychological intervention was superior to supportive counselling in preventing progression to psychosis at 12-month follow-up (3.2% v. 16.9%; P = 0.008) and at 24-month follow-up (6.3% v. 20.0%; P = 0.019).
Integrated psychological intervention appears effective in delaying the onset of psychosis over a 24-month time period in people in an EIPS.
On Our Multi-Wavelength Campaign of the 2011 Outburst of T Pyx
L. Schmidtobreick, A. Bayo, Y. Momany, V. Ivanov, D. Barria, Y. Beletsky, H. M. J. Boffin, G. Brammer, G. Carraro, W.-J. de Wit, J. Girard, G. Hau, M. Moerchen, D. Nuernberger, M. Pretorius, T. Rivinius, R. Sanchez-Janssen, F. Selman, S. Stefl, I. Yegorova
Journal: Proceedings of the International Astronomical Union / Volume 7 / Issue S285 / September 2011
Published online by Cambridge University Press: 20 April 2012, pp. 404-405
The well-known recurrent nova T Pyx brightened by 7 magnitudes starting on 2011 April 14, its first eruption since 1966. T Pyx is unique amongst recurrent novae in being surrounded by a nebula formed of material ejected during previous eruptions. The latest eruption therefore offers the rare opportunity to observe a light echo sweeping through the existing shell, and a new shell forming. The sudden exposure of the existing shell to high-energy light is expected to result in a change of the dust morphology as well as in the partial destruction of molecules. We observe this process in the near- and mid-IR during several epochs using ESO's VLT instruments SINFONI, VISIR and ISAAC. Unfortunately, in the data analysed so far we have only a tentative detection of Brα from the shell, and so might in the end have to be content with upper limits for the emission from the various molecular bands and ionised lines.
Needs-oriented discharge planning for high utilisers of psychiatric services: multicentre randomised controlled trial
B. Puschner, S. Steffen, K. A. Völker, C. Spitzer, W. Gaebel, B. Janssen, H. E. Klein, H. Spiessl, T. Steinert, J. Grempler, R. Muche, T. Becker
Journal: Epidemiology and Psychiatric Sciences / Volume 20 / Issue 2 / June 2011
Aims.
Attempts to reduce high utilisation of mental health inpatient care by targeting the critical time of hospital discharge are rare. In this study, we test the effect of a needs-oriented discharge planning intervention on the number and duration of psychiatric inpatient treatment episodes (primary outcomes), as well as on outpatient service use, needs, psychopathology, depression and quality of life (secondary outcomes).
Four hundred and ninety-one adults with a defined high utilisation of mental health care gave informed consent to participate in a multicentre RCT carried out at five psychiatric hospitals in Germany (Düsseldorf, Greifswald, Regensburg, Ravensburg and Günzburg). Subjects allocated to the intervention group were offered a manualised needs-led discharge planning and monitoring intervention with two intertwined sessions administered at hospital discharge and 3 months thereafter. Outcomes were assessed at four measurement points during a period of 18 months following discharge.
Intention-to-treat analyses showed no effect of the intervention on primary or secondary outcomes.
Process evaluation pending, the intervention cannot be recommended for implementation in routine care. Other approaches, e.g. team-based community care, might be more beneficial for people with persistent and severe mental illness.
On the Lack of Stellar Bars in Coma Dwarf Galaxies
M. Koleva, Ph. Prugniel, I. Vauglin, J. Méndez-Abreu, R. Sánchez-Janssen, J.A.L. Aguerri
We present a study of the bar fraction in the Coma cluster galaxies based on a sample of ~190 galaxies selected from the SDSS-DR6 and observed with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS). We explore the presence of bars, detected by visual classification, throughout an unprecedented luminosity range of 9 mag (−23 < Mr < −14). We find that bars are hosted by galaxies in a tight range of both luminosity (−22 < Mr < −17) and mass (10^9 < M∗/M⊙ < 10^11). We also find that the bar fraction does not vary significantly with distance to the cluster center, implying that the cluster environment plays a second-order role in bar formation/evolution. The shape of the bar fraction distribution with respect to both luminosity and mass is well matched by the luminosity distribution of disk galaxies in Coma, indicating that bars are good tracers of cold stellar disks.
By Lise Aksglaede, Yutaka Aoki, Germaine M. Buck Louis, Esther L. Calderon, Sylvaine Cordier, Julie Damm, Leo F. Doherty, Mary A. Fox, Dori R. Germolec, Linda C. Giudice, Andrea C. Gore, K. Leigh Greathouse, Louis J. Guillette Jr., Heather J. Hamlin, Russ Hauser, Jerrold J. Heindel, Patricia Hunt, Taisen Iguchi, Sarah J. Janssen, Anders Juul, Laxmi A. Kondapalli, Robert W. Luebke, Maricel V. Maffini, John D. Meeker, Pauline Mendola, Sinichi Miyagawa, Annette Mouritsen, Retha R. Newbold, Gail S. Prins, Richard M. Sharpe, Niels E. Skakkebaek, Rémy Slama, Gina M. Solomon, Carlos Sonnenschein, Kaspar Sørensen, Ana M. Soto, Tamotsu Sudo, Shanna H. Swan, Hugh S. Taylor, Jorma Toppari, Helena E. Virtanen, Cheryl L. Walker, Teresa K. Woodruff, Tracey J. Woodruff, R. Thomas Zoeller
Edited by Tracey J. Woodruff, University of California, San Francisco, Sarah J. Janssen, University of California, San Francisco, Louis J. Guillette, Jr, University of Florida, Linda C. Giudice, University of California, San Francisco
Book: Environmental Impacts on Reproductive Health and Fertility
Print publication: 28 January 2010
F-61 Polycapillary Based Confocal Detection Schemes for XRF Micro and Nano-Spectroscopy
B. Vekemans, B. De Samber, T. Schoonjans, G. Silversmit, L. Vincze, R. Evens, K. De Schamphelaere, C. R. Janssen, B. Masschaele, L. Van Hoorebeeke, S. Schmitz, F. Brenker, R. Tucoulou, P. Cloetens, M. Burghammer, J. Susini, C. Riekel
Journal: Powder Diffraction / Volume 24 / Issue 2 / June 2009
Published online by Cambridge University Press: 20 May 2016, p. 169
Statistical properties of mechanically generated surface gravity waves: a laboratory experiment in a three-dimensional wave basin
M. ONORATO, L. CAVALERI, S. FOUQUES, O. GRAMSTAD, P. A. E. M. JANSSEN, J. MONBALIU, A. R. OSBORNE, C. PAKOZDI, M. SERIO, C. T. STANSBERG, A. TOFFOLI, K. TRULSEN
Journal: Journal of Fluid Mechanics / Volume 627 / 25 May 2009
Print publication: 25 May 2009
A wave basin experiment has been performed in the MARINTEK laboratories, in one of the largest existing three-dimensional wave tanks in the world. The aim of the experiment is to investigate the effects of directional energy distribution on the statistical properties of surface gravity waves. Different degrees of directionality have been considered, starting from long-crested waves up to directional distributions with a spread of ±30° at the spectral peak. Particular attention is given to the tails of the distribution function of the surface elevation, wave heights and wave crests. Comparison with a simplified model based on second-order theory is reported. The results show that for long-crested, steep and narrow-banded waves, the second-order theory underestimates the probability of occurrence of large waves. As directional effects are included, the departure from second-order theory becomes less accentuated and the surface elevation is characterized by weak deviations from Gaussian statistics.
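The departure from Gaussian statistics for steep, long-crested waves can be illustrated with the standard narrow-band second-order (Tayfun-type) crest model. The sketch below is a generic textbook approximation, not the specific second-order model used in the experiment; the steepness parameter `mu` and the chosen values are assumptions for illustration.

```python
import math

def rayleigh_crest_exceedance(crest, hs):
    """Linear (Gaussian, narrow-band) theory: P(crest height > crest)
    follows a Rayleigh law, with hs the significant wave height."""
    return math.exp(-8.0 * (crest / hs) ** 2)

def second_order_crest_exceedance(crest, hs, mu):
    """Narrow-band second-order correction: invert crest = a + (mu/2)*a**2
    for the linear amplitude a (mu plays the role of a characteristic
    steepness), then apply the Rayleigh law to a."""
    a = (math.sqrt(1.0 + 2.0 * mu * crest) - 1.0) / mu if mu > 0 else crest
    return math.exp(-8.0 * (a / hs) ** 2)

# Second-order bound harmonics sharpen crests, so large crests are *more*
# probable than the linear Rayleigh law predicts:
p_lin = rayleigh_crest_exceedance(2.0, 2.0)
p_2nd = second_order_crest_exceedance(2.0, 2.0, 0.1)
assert p_2nd > p_lin
```

This is the sense in which "second-order theory underestimates the probability of occurrence of large waves" only when additional (e.g. modulational) effects kick in: the second-order correction itself already raises crest probabilities above Gaussian.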
Confocal μ–XRF and μ–XAFS Studies of an Uranium-Rich Sediment from a Nuclear Waste Disposal Natural Analogue Site
M A Denecke, K Janssens, J Rothe, U Noseck, R Simon
Journal: Microscopy and Microanalysis / Volume 11 / Issue S02 / August 2005
Extended abstract of a paper presented at Microscopy and Microanalysis 2005 in Honolulu, Hawaii, USA, July 31 – August 4, 2005
The luminosity function of galaxies in the Hercules cluster
R. Sánchez-Janssen, J. Iglesias-Páramo, C. Muñoz-Tuñón, J. A. L. Aguerri, J. M. Vílchez
Journal: Proceedings of the International Astronomical Union / Volume 2004 / Issue IAUC195 / March 2004
Print publication: March 2004
We have imaged $\sim 1$ deg$^{2}$ in the V-band in the direction of the Hercules cluster (Abell 2151). The data are used to compute, for the first time, the luminosity function (LF) of galaxies in the cluster down to the dwarf regime (M$_{lim}$ $\sim -13.85$). The global LF is well described by a Schechter function (Schechter 1976) with best-fit parameters $\alpha = -1.30 \pm 0.06$ and M$_V^* = -21.25 \pm 0.25$. The radial dependence of the LF has also been studied; it remains almost constant within the errors even beyond the virial radius. Given the presence of significant substructure within the cluster, we have analysed the LFs in different regions. While the LFs of the two subclusters present are consistent with each other and with the global one, the southernmost region exhibits a somewhat steeper faint-end slope.
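The best-fit Schechter function quoted above is straightforward to evaluate. A minimal sketch (Python, my own illustration) in absolute-magnitude form; the normalisation phi* is left arbitrary since the abstract does not quote it:

```python
import math

ALPHA, M_STAR = -1.30, -21.25   # best-fit parameters quoted in the abstract

def schechter_mag(M, phi_star=1.0, alpha=ALPHA, m_star=M_STAR):
    """Schechter (1976) luminosity function per unit magnitude:
    phi(M) = 0.4*ln(10)*phi* * x**(alpha+1) * exp(-x), x = 10**(0.4*(M*-M))."""
    x = 10.0 ** (0.4 * (m_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

# With alpha < -1 the faint end rises: dwarfs outnumber bright galaxies,
# while the exponential cut-off suppresses counts brighter than M*.
assert schechter_mag(-14.0) > schechter_mag(-18.0) > schechter_mag(-23.0)
```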
Prebiotics affect nutrient digestibility but not faecal ammonia in dogs fed increased dietary protein levels
M. Hesta, G. P. J. Janssens, S. Millet, R. De Wilde
Journal: British Journal of Nutrition / Volume 90 / Issue 6 / December 2003
Published online by Cambridge University Press: 09 March 2007, pp. 1007-1014
An increased protein content and less digestible protein sources in the diet can induce bad faecal odour. The present study investigated the effect of adding prebiotics to dog diets enriched with animal-derived protein sources on apparent digestibilities and faecal ammonia concentration. In three consecutive periods eight healthy beagle dogs were fed a commercial dog diet that was gradually supplemented by up to 50 % with meat and bone meal (MBM), greaves meal (GM) or poultry meal (PM) respectively. Afterwards, 3 % fructo-oligosaccharides or 3 % isomalto-oligosaccharides were substituted for 3 % of the total diet. Supplementation with animal-derived protein sources did not significantly decrease the apparent N digestibility, but oligosaccharides did. On the other hand, the bacterial N content (% DM) in the faeces was highest in the oligosaccharide groups, followed by the protein-supplemented groups, and lowest in the control groups. When the apparent N digestibility was corrected for bacterial N, no significant differences remained, except in the GM group, where the corrected N digestibility was still lower after oligosaccharide supplementation. The amount of faecal ammonia was significantly increased by supplementing with protein or oligosaccharides in the MBM and GM groups but not in the PM group. When apparent N digestibility is interpreted, a correction for bacterial N should be taken into account, especially when prebiotics are added to the diet. Oligosaccharides did not reduce the faecal ammonia concentrations as expected.
Discrimination and delusional ideation
I. Janssen, M. Hanssen, M. Bak, R. V. Bijl, R. De Graaf, W. Vollebergh, K. McKenzie, J. Van Os
Journal: The British Journal of Psychiatry / Volume 182 / Issue 1 / 02 January 2003
Print publication: 02 January 2003
In the UK and The Netherlands, rates of psychosis are high in groups chronically exposed to discrimination.
To test whether perceived discrimination is associated longitudinally with onset of psychosis.
A 3-year prospective study of cohorts with no history of psychosis and differential rates of reported discrimination on the basis of age, gender, disability, appearance, skin colour or ethnicity and sexual orientation was conducted in the Dutch general population (n=4076). The main outcome was onset of psychotic symptoms (delusions and hallucinations).
The rate of delusional ideation was 0.5% (n=19) in those who did not report discrimination, 0.9% (n=4) in those who reported discrimination in one domain, and 2.7% (n=3) in those who reported discrimination in more than one domain (exact P=0.027). This association remained after adjustment for possible confounders. No association was found between baseline discrimination and onset of hallucinatory experiences.
Perceived discrimination may induce delusional ideation and thus contribute to the high observed rates of psychotic disorder in exposed minority populations.
Organizers: A. Schlichting, S. Conti, H. Koch, S. Müller, B. Niethammer, M. Rumpf, C. Thiele, J.J.L. Velázquez
Thursday, March 23, 2:15 p.m., Lipschitz-Saal
Fulvio Ricci (Scuola Normale Superiore, Pisa)
A maximal restriction theorem and Lebesgue points of Fourier transforms in the plane
We present the results of recent joint work with Detlef Müller and James Wright. A Fourier restriction theorem relative to a given surface $S\subset \mathbb R^n$ provides $L^p-L^q$ inequalities for the operator $\mathcal R:f\in\mathcal S(\mathbb R^n)\longmapsto \hat f_{|_S}$. Despite the wide literature concerning the range of validity and applications of restriction inequalities, not much has been said about the explicit relation, in the presence of a $p$-$q$ restriction inequality, between the two functions $\hat f$ and $\mathcal R f$ for a general $f\in L^p(\mathbb R^n)$. We give a first partial answer for curves in the plane by analysing the operator which assigns to $f\in L^p(\mathbb R^2)$, with $p<4/3$, a smoothened version of the strong maximal function of $\hat f$.
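For reference, the restriction inequality alluded to above can be displayed in its standard form — here $d\sigma$ denotes a suitable measure on $S$ and $C_{p,q}$ the restriction constant (notation assumed, not taken from the talk):

```latex
\bigl\| \mathcal{R} f \bigr\|_{L^{q}(S,\,d\sigma)}
  \;=\; \bigl\| \hat f\big|_{S} \bigr\|_{L^{q}(S,\,d\sigma)}
  \;\le\; C_{p,q}\, \| f \|_{L^{p}(\mathbb{R}^{n})},
  \qquad f \in \mathcal{S}(\mathbb{R}^{n}),
```

and the point of the talk is what $\mathcal R f$, defined a priori only on Schwartz functions and extended by density, has to do with the pointwise values of $\hat f$ for general $f \in L^p$.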
Thursday, April 20, 2:15 p.m., Lipschitz-Saal
Francesco Fanelli (Institut Camille Jordan, Université de Lyon)
Asymptotic behaviour of non-homogenous fluids in fast rotation
In this talk, we consider a class of singular perturbation problems for systems of PDEs related to the dynamics of geophysical fluids. We are interested here in effects due to both the non-homogeneity of the fluid and the Earth's rotation, and to their interplay. After a review of known results, we specialize to the 2-D density-dependent incompressible Navier-Stokes equations with Coriolis force: our goal is to characterize the asymptotic dynamics of weak solutions to this model, in the limit when the rotation becomes faster and faster.
We present two kinds of results (deeply different from each other, from a qualitative viewpoint), depending on whether the initial densities are small perturbations of a constant state or of a truly non-constant reference density. In the former case we prove that the system tends to a homogeneous Navier-Stokes system with an additional forcing term, which is due to density variations and which is a remainder of the action of the Coriolis force. In the latter case, instead, we show that the limit equations become linear, and moreover one can identify only a mean motion, in terms of the limit vorticity and the limit density fluctuation function; this issue can be interpreted as a sort of turbulent behaviour of the fluid in the limit of fast rotation.
This talk is based on a joint work with Isabelle Gallagher.
Thursday, April 27, 2:15 p.m., seminar room 0.008
Diogo Oliveira e Silva (University of Bonn)
Some recent progress on sharp Fourier restriction theory
It has long been understood that Strichartz estimates for the homogeneous Schrödinger equation correspond to adjoint restriction estimates on the paraboloid. The study of extremizers and sharp constants for the corresponding inequalities has a short but rich history. In this talk, I will summarize it briefly, and then specialize to the case of certain convex perturbations of the paraboloid. A geometric comparison principle for convolution measures can be used to establish the corresponding sharp Strichartz inequality, and to prove that extremizers do not exist. The mechanism underlying this lack of compactness is explained by the behaviour of extremizing sequences which will be described via concentration-compactness. Time permitting, I will show how this resolves a dichotomy from the recent literature concerning the existence of extremizers for a family of fourth order Schrödinger equations.
Thursday, May 4, 2:15 p.m., Lipschitz-Saal
Christian Seis (University of Bonn)
Optimal stability estimates for continuity equations
In this talk, I will review new stability estimates for continuity equations and compare those with previously known analogous estimates for Lagrangian flows. I will explain how the new results allow for a quantitative proof of well-posedness in the low regularity setting considered by DiPerna and Lions. The estimates are obtained for Kantorovich–Rubinstein distances with logarithmic cost functions and thus allow us to quantify the order of weak convergence in several applications. I plan to conclude this talk with two examples: 1) A lower bound on mixing rates obtained by stirring two immiscible fluids. 2) An upper bound on convergence rates for numerical upwind schemes (obtained jointly with A. Schlichting).
Thursday, June 1, 2:15 p.m., seminar room 0.008
Charlotte Perrin (RWTH Aachen)
A macroscopic model for granular flows
I will present in this talk an original model for immersed granular flows which takes into account memory effects. In the first part I will justify these memory effects by means of a singular limit. It relies on recent analysis tools for the compressible Navier-Stokes equations. The second part of the talk will be dedicated to one-dimensional flows for which a direct Lagrangian approach can be developed.
Robert L. Pego (Carnegie Mellon University)
Lipschitz Lectures: Studies in dynamics, coherent structures and stability
Thursday, July 6, 2:15 p.m., seminar room 0.008
Simon Rösel (WIAS Berlin)
Density of convex intersections and applications
In a general framework, it is shown how density properties of intersections of convex sets naturally arise from the perturbation or dualization of constrained optimization and variational inequality problems. Several density results (and counterexamples) for closed convex sets with pointwise constraints in Sobolev spaces are presented. Diverse applications are provided, which include elasto-plasticity and image restoration problems. Finally, the results are further discussed in the context of Finite Element discretizations of sets associated to convex constraints.
Thursday, July 13, 2:15 p.m., Lipschitz-Saal
Ievgen Verbytskyi (National University of Ukraine)
A model of tissue border displacement in non-contact Photoacoustic Tomography
Photoacoustic tomography (PAT) is a relatively new imaging modality, which allows, for example, the vascular network in biological tissue to be visualized noninvasively. This tomographic method has an advantage over pure optical/acoustical methods due to high optical contrast and low acoustic scattering in deep tissue. The common PAT methodology, based on measurements of the acoustic pressure by piezoelectric sensors placed on the tissue surface, limits its practical versatility. A novel, completely non-contact and full-field PAT system is described. In non-contact PAT, the measurement of the surface displacement induced by the acoustic pressure at the tissue/air border is investigated. A model of the tissue displacement caused by medium pressure, based on the momentum conservation law, is proposed. Experimental data processing and simulation techniques are developed. The error of the simulated displacement in comparison with experimental data is calculated.
Creativity in geoscience
Views and news about geoscience and technology.
What is scientific computing?
January 10, 2018 / Matt Hall
I started my career in sequence stratigraphy, so I know a futile discussion about semantics when I see one. But humour me for a second.
As you may know, we offer a multi-day course on 'geocomputing'. Somebody just asked me: what is this mysterious, made-up-sounding discipline? Swiftly followed by: can you really teach people how to do computational geoscience in a few days? And then: can YOU really teach people anything??
Good questions
You can come at the same kind of question from different angles. For example, sometimes professional programmers get jumpy about programming courses and the whole "learn to code" movement. I think the objection is that programming is a profession, like other kinds of engineering, and no-one would dream of offering a 3-day course on, say, dentistry for beginners.
These concerns are valid, sort of.
No, you can't learn to be a computational scientist in 3 days. But you can make a start. A really good one at that.
And no, we're not programmers. But we're scientists who get things done with code. And we're here to help.
And definitely no, we're not trying to teach people to be software engineers. We want to see more computational geoscientists, which is a different thing entirely.
So what's geocomputing then?
Words seem inadequate for nuanced discussion. Let's instead use the language of ternary diagrams. Here's how I think 'scientific computing' stacks up against 'computer science' and 'software engineering'...
If you think these are confusing, just be glad I didn't go for tetrahedrons.
These are silly, of course. We could argue about them for hours I'm sure. Where would IT fit? ("It's all about the business" or something like that.) Where does Agile fit? (I've caricatured our journey, or tried to.) Where do you fit?
Matt is a geoscientist in Nova Scotia, Canada. Founder of Agile Scientific, co-founder of The HUB South Shore. Matt is into geology, geophysics, and machine learning.
Not getting hacked
November 23, 2017 / Matt Hall
This kind of password is horrible for lots of reasons. The real solution to password madness is a password manager.
The end of the year is a great time to look around at your life and sort stuff out. One of the things you almost certainly need to sort out is your online security. Because if you haven't been hacked already (you probably have), you're just about to be.
Just look at some recent stories from the world of data security:
Yesterday, it emerged that Uber concealed a hack that exposed data of 57 million users
Last month, we learned that when Yahoo said 1 billion accounts had been compromised in August 2013... it was wrong. It was 3 billion. In other words: all of their user accounts.
On 29 July, Equifax discovered that hackers had stolen 143 million account details, seriously compromising hundreds of thousands of people.
There are plenty of others; Wired has been keeping track of them — read more here. Or check out Wikipedia's list.
Despite all this, I see hardly anyone using a password manager, and anecdotally I hear that hardly anyone uses two-factor authentication either. This tells me that at least 80% of smart people, including lots of my friends and relatives, are in daily peril. Oh no!
After reading this post, I hope you do two things:
Start using a password manager. If you only do one thing, do this.
Turn on two-factor authentication for your most vulnerable accounts.
Start using a password manager
Please, right now, download and install LastPass on every device and in every browser you use. It's awesome:
It stores all your passwords! This way, they can all be different, and each one can be highly secure.
It generates secure, random passwords for new accounts you create.
It scores you on the security level of your passwords, and lets you easily change insecure ones.
The free version is awesome, and the premium version is only $2/month.
There are other password managers, of course, but I've used this one for years and it's excellent. Once you're set up, you can start changing passwords that are insecure, or re-used on multiple sites... or which are at Uber, Yahoo, or Equifax.
One surprise from using LastPass is being able to count the number of accounts I have created around the web over the years. I have 473 accounts stored in LastPass! That's 473 places to get hacked... how many places are you exposed?
The one catch: you need a bulletproof key for your password manager. Best advice: use a long pass-phrase instead.
The obligatory password cartoon, by xkcd and licensed CC-BY-NC
Sure, it's belt and braces — but you don't want your security trousers to fall down, right?
Turn on two-factor authentication

Er, anyway, the point is that even with a secure password, your password can still be stolen and your account compromised. But it's much, much harder if you use two-factor authentication, aka 2FA. This requires you to enter a code — from a hardware key or an app, or received via SMS — as well as your password. If you use an app, it introduces still another layer of security, because your phone should be locked.
I use Google's Authenticator app, and I like it. There's a little bit of hassle the first time you set it up, but after that it's plain sailing. I have 2FA turned on for all my 'high risk' accounts: Google, Twitter, Facebook, Apple, AWS, my credit card processor, my accounting software, my bank, my domain name provider, GitHub, and of course LastPass. Indeed, LastPass even lets me specify that logins must originate in Canada.
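Under the hood, the codes these apps show you are just an HMAC of the current 30-second time window, as specified in RFC 6238. Here's a minimal sketch in Python — the base32 secret below is a made-up example, not a real key:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # which 30-second window we're in
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # 'dynamic truncation' from RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a 6-digit code that changes every 30 seconds
```

The server computes the same thing from its copy of the secret, so a stolen password alone isn't enough to log in.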
There are some other easy things you can do to make yourself less hackable:
Install updates on your phones, tablets, and other computers. Keep browsers and operating systems up to date.
Be on high alert for phishing attempts. Don't follow links to sites like your bank or social media sites — type them into your browser if possible. Be very suspicious of anyone contacting you, especially banks.
Don't use USB sticks. The cloud is much safer — I use Dropbox myself, it's awesome.
For more tips, check out this excellent article from Motherboard on not getting hacked.
The norm and simple solutions
October 19, 2017 / Matt Hall
Last time I wrote about different ways of calculating distance in a vector space — say, a two-dimensional Euclidean plane like the streets of Portland, Oregon. I showed three ways to reckon the distance, or norm, between two points (i.e. vectors). As a reminder, using the distance between points u and v on the map below this time:
$$ \|\mathbf{u} - \mathbf{v}\|_1 = |u_x - v_x| + |u_y - v_y| $$
$$ \|\mathbf{u} - \mathbf{v}\|_2 = \sqrt{(u_x - v_x)^2 + (u_y - v_y)^2} $$
$$ \|\mathbf{u} - \mathbf{v}\|_\infty = \mathrm{max}(|u_x - v_x|, |u_y - v_y|) $$
Let's think about all the other points on Portland's streets that are the same distance away from u as v is. Again, we have to think about what we mean by distance. If we're walking, or taking a cab, we'll need to think about \(\ell_1\) — the sum of the distances in x and y. This is shown on the left-most map, below.
For simplicity, imagine u is the origin, or (0, 0) in Cartesian coordinates. Then v is (0, 4). The sum of the distances is 4. Looking for points with the same sum, we find the pink points on the map.
If we're thinking about how the crow flies, or \(\ell_2\) norm, then the middle map sums up the situation: the pink points are all equidistant from u. All good: this is what we usually think of as 'distance'.
The \(\ell_\infty\) norm, on the other hand, only cares about the maximum distance in any direction, or the maximum element in the vector. So all points whose maximum coordinate is 4 meet the criterion: (1, 4), (2, 4), (4, 3) and (4, 0) all work.
You might remember there was also a weird definition for the \(\ell_0\) norm, which basically just counts the non-zero elements of the vector. So, again treating u as the origin for simplicity, we're looking for all the points that, like v, have only one non-zero Cartesian coordinate. These points form an upright cross, like a + sign (right).
So there you have it: four ways to draw a circle.
A circle is just a set of points that are equidistant from the centre. So, depending on how you define distance, the shapes above are all 'circles'. In particular, if we normalize the (u, v) distance as 1, we have the following unit circles:
It turns out we can define any number of norms (if you like the sound of \(\ell_{2.4}\) or \(\ell_{240}\) or \(\ell_{0.024}\)...) but most of the time, these will suffice. You can probably imagine the shapes of the unit circles defined by these other norms.
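If you'd rather check your imagination than trust it, points on the unit 'circle' for any p are easy to compute: take directions around the origin and scale each one so its p-norm is exactly 1. A quick sketch with NumPy (the function name is mine):

```python
import numpy as np

def unit_circle(p, n=360):
    """Points v with ||v||_p == 1, made by rescaling unit direction vectors."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    v = np.stack([np.cos(theta), np.sin(theta)])       # columns are directions
    norms = np.sum(np.abs(v) ** p, axis=0) ** (1 / p)  # p-norm of each column
    return v / norms                                   # each column now has p-norm 1

pts = unit_circle(240)                 # a very large p...
print(np.abs(pts).max(axis=0).min())   # ...hugs the l-infinity unit square
```

For p = 240 the points lie almost exactly on the square of the \(\ell_\infty\) unit circle, as the printed value (just under 1) suggests.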
What can we do with this stuff?
Let's think about solving equations. Think about solving this:
$$ x + 2y = 8 $$
I'm sure you can come up with a soluiton in your head, x = 6 and y = 1 maybe. But one equation and two unknowns means that this problem is underdetermined, and consequently has an infinite number of solutions. The solutions can be visualized geometrically as a line in the Euclidean plane (right).
But let's say I don't want solutions like (3.141590, 2.429205) or (2742, –1367). Let's say I want the simplest solution. What's the simplest solution?
This is a reasonable question, but how we answer it depends how we define 'simple'. One way is to ask for the nearest solution to the origin. Also reasonable... but remember that we have a few different ways to define 'nearest'. Let's start with the everyday definition: the shortest crow-flies distance from the origin. The crow-flies, \(\ell_2\) distances all lie on a circle, so you can imagine starting with a tiny circle at the origin, and 'inflating' it until it touches the line \(x + 2y - 8 = 0\). This is usually called the minimum norm solution, minimized on \(\ell_2\). We can find it in Python like so:
import numpy.linalg as la
A = [[1, 2]]
b = [8]
x, *rest = la.lstsq(A, b, rcond=None)  # x is the minimum-norm solution
The result is the vector (1.6, 3.2). You could almost have worked that out in your head, but imagine having 1000 equations to solve and you start to appreciate numpy.linalg. Admittedly, it's even easier in Octave (or MATLAB if you must) and Julia:
A = [1 2]
b = 8
A \ b
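Incidentally, for an underdetermined system like this one, the minimum-norm solution also has a closed form — the right pseudoinverse applied to the right-hand side:

$$ \mathbf{x}^* = A^\top (A A^\top)^{-1} \mathbf{b} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \frac{8}{5} = \begin{pmatrix} 1.6 \\ 3.2 \end{pmatrix} $$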
But remember we have lots of norms. It turns out that minimizing other norms can be really useful. For example, minimizing the \(\ell_1\) norm — growing a diamond out from the origin — results in (0, 4). The \(\ell_0\) norm gives the same sparse* result. Minimizing the \(\ell_\infty\) norm leads to \( x = y = 8/3 \approx 2.67\).
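For the record, here's one way to get that \(\ell_1\)-minimal solution numerically: pose it as a linear program by splitting each unknown into positive and negative parts. This sketch uses SciPy's linprog; the variable names are mine:

```python
import numpy as np
from scipy.optimize import linprog

# Find the solution of x + 2y = 8 with the smallest l1 norm |x| + |y|.
# Variables are [x+, x-, y+, y-], all >= 0, with x = x+ - x- and y = y+ - y-.
c = np.ones(4)                      # objective: x+ + x- + y+ + y- (= |x| + |y| at the optimum)
A_eq = np.array([[1, -1, 2, -2]])   # encodes x + 2y = 8
b_eq = np.array([8])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
x = res.x[0] - res.x[1]
y = res.x[2] - res.x[3]
print(x, y)  # approximately (0, 4): the sparse solution
```

The solver lands on (0, 4), the 'diamond' solution — one nonzero coordinate, exactly as the geometry predicts.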
This was the diagram I wanted to get to when I started with the 'how far away is the supermarket' business. So I think I'll stop now... have fun with Norm!
* I won't get into sparsity now, but it's a big deal. People doing big computations are always looking for sparse representations of things. They use less memory, are less expensive to compute with, and are conceptually 'neater'. Sparsity is really important in compressed sensing, which has been a bit of a buzzword in geophysics lately.
The norm: kings, crows and taxicabs
How far away is the supermarket from your house? There are lots of ways of answering this question:
As the crow flies. This is the green line from \(\mathbf{a}\) to \(\mathbf{b}\) on the map below.
The 'city block' driving distance. If you live on a grid of streets, all possible routes are the same length — represented by the orange lines on the map below.
In time, not distance. This is usually a more useful answer... but not one we're going to discuss today.
Don't worry about the mathematical notation on this map just yet. The point is that there's more than one way to think about the distance between two points, or indeed any measure of 'size'.
Higher dimensions
The map is obviously two-dimensional, but it's fairly easy to conceive of 'size' in any number of dimensions. This is important, because we often deal with more than the 2 dimensions on a map, or even the 3 dimensions of a seismic stack. For example, we think of raw so-called 3D seismic data as having 5 dimensions (x position, y position, offset, time, and azimuth). We might even formulate a machine learning task with a hundred or more dimensions (or 'features').
Why do we care about measuring distances in high dimensions? When we're dealing with data in these high-dimensional spaces, 'distance' is a useful way to measure the similarity between two points. For example, I might want to select those samples that are close to a particular point of interest. Or, from among the points satisfying some constraint, select the one that's closest to the origin.
Definitions and nomenclature
We'll define norms in the context of linear algebra, which is the study of vector spaces (think of multi-dimensional 'data spaces' like the 5D space of seismic data). A norm is a function that assigns a positive scalar size to a vector \(\mathbf{v}\) , with a size of zero reserved for the zero vector (in the Cartesian plane, the zero vector has coordinates (0, 0) and is usually called the origin). Any norm \(\|\mathbf{v}\|\) of this vector satisfies the following conditions:
Absolutely homogeneous. The norm of \(\alpha\mathbf{v}\) is equal to \(|\alpha|\) times the norm of \(\mathbf{v}\).
Subadditive. The norm of \( (\mathbf{u} + \mathbf{v}) \) is less than or equal to the norm of \(\mathbf{u}\) plus the norm of \(\mathbf{v}\). In other words, the norm satisfies the triangle inequality.
Positive. The first two conditions imply that the norm is non-negative.
Definite. Only the zero vector has a norm of 0.
Kings, crows and taxicabs
Let's return to the point about lots of ways to define distance. We'll start with the most familiar definition of distance on a map— the Euclidean distance, aka the \(\ell_2\) or \(L_2\) norm (confusingly, sometimes the two is written as a superscript), the 2-norm, or sometimes just 'the norm' (who says maths has too much jargon?). This is the 'as-the-crow-flies distance' on the map above, and we can calculate it using Pythagoras:
$$ \|\mathbf{v}\|_2 = \sqrt{(a_x - b_x)^2 + (a_y - b_y)^2} $$
You can extend this to an arbitrary number of dimensions, just keep adding the squared elementwise differences. We can also calculate the norm of a single vector in n-space, which is really just the distance between the origin and the vector:
$$ \|\mathbf{u}\|_2 = \sqrt{u_1^2 + u_2^2 + \ldots + u_n^2} = \sqrt{\mathbf{u} \cdot \mathbf{u}} $$
As shown here, the 2-norm of a vector is the square root of its dot product with itself.
So the crow-flies distance is fairly intuitive... what about that awkward city block distance? This is usually referred to as the Manhattan distance, the taxicab distance, the \(\ell_1\) or \(L_1\) norm, or the 1-norm. As you can see on the map, it's just the sum of the absolute distances in each dimension, x and y in our case:
$$ \|\mathbf{v}\|_1 = |a_x - b_x| + |a_y - b_y| $$
What's this magic number 1 all about? It turns out that the distance metric can be generalized as the so-called p-norm, where p can take any positive value up to infinity. The definition of the p-norm is consistent with the two norms we just met:
$$ \| \mathbf{u} \|_p = \left( \sum_{i=1}^n | u_i | ^p \right)^{1/p} $$
In practice, I've only ever seen p = 1, 2, or infinity (and 0, but we'll get to that). Let's look at the meaning of the \(\infty\)-norm, aka the \(\ell_\infty\) or \(L_\infty\) norm, which is sometimes called the Chebyshev distance or chessboard distance (because it defines the minimum number of moves for a king to any given square):
$$ \|\mathbf{v}\|_\infty = \mathrm{max}(|a_x - b_x|, |a_y - b_y|) $$
In other words, the Chebyshev distance is simply the maximum absolute element in a given vector. In a nutshell, the infinitieth root of the sum of a bunch of numbers raised to the infinitieth power is the same as the infinitieth root of the largest of those numbers raised to the infinitieth power — because infinity is weird like that.
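You can watch this happen numerically: as p grows, the p-norm converges to the largest absolute element. A quick check with the vector (3, 4):

```python
import numpy as np

v = np.array([3.0, 4.0])
for p in (1, 2, 8, 64):
    print(p, np.sum(np.abs(v) ** p) ** (1 / p))
# p = 1 gives 7.0, p = 2 gives 5.0, and by p = 64 the value
# is indistinguishable from max(|v|) = 4
```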
What about p = 0?
Infinity is weird, but so is zero sometimes. Taking the zeroeth root of a lot of ones doesn't make a lot of sense, so mathematicians often redefine the \(\ell_0\) or \(L_0\) "norm" (not a true norm) as a simple count of the number of non-zero elements in a vector. In other words, we toss out the 0th root, define \(0^0 := 0 \) and do:
$$ \| \mathbf{u} \|_0 = |u_1|^0 + |u_2|^0 + \cdots + |u_n|^0 $$
(Or, if we're thinking about the points \(\mathbf{a}\) and \(\mathbf{b}\) again, just remember that \(\mathbf{v}\) = \(\mathbf{a}\) - \(\mathbf{b}\).)
Computing norms
Let's take a quick look at computing the norm of some vectors in Python:
>>> import numpy as np
>>> a = np.array([1, 1]).T
>>> b = np.array([6, 5]).T
>>> L_0 = np.count_nonzero(a - b)
>>> L_1 = np.sum(np.abs(a - b))
>>> L_2 = np.sqrt((a - b) @ (a - b))
>>> L_inf = np.max(np.abs(a - b))
>>> # Using NumPy's `linalg` module:
>>> import numpy.linalg as la
>>> for p in (0, 1, 2, np.inf):
...     print("L_{} norm = {}".format(p, la.norm(a - b, p)))
L_0 norm = 2.0
L_1 norm = 9.0
L_2 norm = 6.4031242374328485
L_inf norm = 5.0
What can we do with all this?
So far, so good. But what's the point of these metrics? How can we use them to solve problems? We'll get into that in a future post, so don't go too far!
For now I'll leave you to play with this little interactive demo of the effect of changing p-norms on a Voronoi triangle tiling — it's by Sarah Greer, a geophysics student at UT Austin.
Machine learning and analytics in geoscience
June 14, 2017 / Matt Hall
We're at EAGE in Paris. I'm sitting in a corner of the exhibition because the power is out in the main hall, so all the talks for the afternoon have been postponed. The poor EAGE team must be beside themselves, I feel for them. (Note to future event organizers: white boards!)
Yesterday Diego, Evan, and I — along with lots of hackathon participants — were at the Data Science for Geosciences workshop, an all-day machine learning fest. The session was chaired by Cyril Agut (Total), Marianne Cuif-Sjostrand (Total), Florence Delprat-Jannaud (IFPEN), and Noalwenn Dubos-Sallée (IFPEN), and they had assembled a good programme, with quite a bit of variety.
Michel Lutz, Group Data Officer at Total, and adjunct at École des Mines de Saint-Étienne, gave a talk entitled, Data science & application to geosciences: an introduction. It was high-level but thoughtful, and such glimpses into large companies are always interesting. The company seems to have a mature data science strategy, and a well-developed technology stack. Henri Blondelle (AgileDD) asked about open data at the end, and Michel somewhat sidestepped on specifics, but at least conceded that the company could do more in open source code, if not data.
Infrastructure, big data, and IoT
Next we heard a set of talks about the infrastructure aspect of big (really big) data.
Alan Smith of Luchelan told the group about some negative experiences with Hadoop and seismic data (though it didn't seem to me that his problems were insoluble since I know of several projects that use it), and the realization that sometimes you just need fast infrastructure and custom software.
Hadi Jamali-Rad of Shell followed with an IoT story from the field. He had deployed a large number of wireless seismic sensors around a village in Holland, then tested various aspects of the communication system to answer questions like, what's the packet loss rate when you collect data from the nodes? What about from a balloon stationed over the site?
Duncan Irving of Teradata asked, Why aren't we [in geoscience] doing live analytics on 100PB of live data like eBay? His hypothesis is that IT organizations in oil and gas failed to keep up with key developments in data analytics, so now there's a crisis of sorts and we need to change how we handle our processes and culture around big data.
We shifted gears a bit after lunch. I started with a characteristically meta talk about how I think our community can help ensure that our research and practice in this domain leads to good places as soon as possible. I'll record it and post it soon.
Nicolas Audebert of ONERA/IRISA presented a nice application of a 3D convolutional neural network (CNN) to the segmentation and classification of hyperspectral aerial photography. His images have between about 100 and 400 channels, and he finds that CNNs reduce error rates by up to about 50% (compared to an SVM) on noisy or complex images.
Henri Blondelle of Agile Data Decisions talked about his experience of the CDA's unstructured data challenge of 2016. About 80% of the dataset is unstructured (e.g. folders of PDFs and TIFFs), and Henri's vision is to transform 80% of that into structured data, using tools like AgileDD's IQC to do OCR and heuristic labeling.
Irina Emelyanova of CSIRO provided another case study: unsupervised e-facies prediction using various types of clustering, from K-means to some interesting variants of self-organizing maps. It was refreshing to see someone revealing a lot of the details of their implementation.
Jan Limbeck, a research scientist at Shell wrapped up the session with an overview of Shell's activities around big data and machine learning, as they prepare for exabytes. He mentioned the Mauricio Araya-Polo et al. paper on deep learning in seismic shot gathers in the special March issue of The Leading Edge — clearly it's easiest to talk about things they've already published. He also listed a lot of Shell's machine learning projects (frac optimization, knowledge graphs, reservoir simulation, etc), but there's no way to know what state they are in or what their chances of success are.
As well as all the 9 talks, there were 13 posters, about a third of which were on infrastructure stuff, with the rest providing more case studies. Unfortunately, I didn't get the chance to look at them in any detail, but I appreciated the organizers making time for discussion around the posters. If they'd also allowed more physical space for the discussion it could have been awesome.
Analytics!
After hearing about Mentimeter from Chris Jackson, I took the opportunity to try it out on the audience. Here are the results; I think they are fairly self-explanatory...
I also threw in the mindmap I drew at the end as a sort of summary. The vertical axis represents something like 'abstraction' or 'time' (in a workflow sense), and I think each layer depends somewhat on those beneath it. It probably makes sense to no-one but me.
Breakout!
It seems clear that 2017 is the breakout year for machine learning in petroleum geoscience, and in petroleum in general. If your company or institution has not yet gone beyond "watching" or "thinking about" data science and machine learning, then it is falling behind by a little more every day, and it has been for at least a year. Now's the time to choose if you want to be part of what happens next, or a victim of it.
The Computer History Museum
April 24, 2017 / Matt Hall
Mountain View, California, looking northeast over US 101 and San Francisco Bay. The Computer History Museum sits between the Googleplex and NASA Ames. Hangar 1, the giant airship hangar, is visible on the right of the image. Imagery and map data © Google, Landsat/Copernicus.
A few days ago I was lucky enough to have a client meeting in Santa Clara, California. I had not been to Silicon Valley before, and it was more than a little exciting to drive down US Route 101 past the offices of Google, Oracle and Amazon and basically every other tech company, marvelling at Intel's factory and the hangars at NASA Ames, and seeing signs to places like Stanford, Mountain View, and Menlo Park.
I had a spare day before I flew home, and decided to visit Stanford's legendary geophysics department, where there was a lecture that day. With an hour or so to kill, I thought I'd take in the Computer History Museum on the way… I never made it to Stanford.
The Computer History Museum was founded in 1996, building on an ambition of über-geek Gordon Bell. It sits in the heart of Mountain View, surrounded by the Googleplex, NASA Ames, and Microsoft. It's a modern, airy building with the museum and a small café downstairs, and meeting facilities on the upper floor. It turns out to be an easy place to burn four hours.
I saw a lot of computers that day. You can see them too because much of the collection is in the online catalog. A few things that stood out for me were:
The analog computers of the pre-digital era, especially the beautiful Nordsieck Differential Analyzer.
The actual, fully working, room-sized IBM 1401. Picture below.
A wall of home computers from the early 1980s, including a BBC Micro and a ZX Spectrum, both important machines to me. The display even evoked the smell of those machines... and the frustration of reading programs from audio tapes.
An Enigma machine, an Apple I, and Google's actual server (just one rack!) from 1999, complete with its sagging motherboards.
No seismic
I had been hoping to read more about the early days of Texas Instruments, because it was spun out of a seismic company, Geophysical Service or GSI, and at least some of their early integrated circuit research was driven by the needs of seismic imaging. But I was surprised not to find a single mention of seismic processing in the place. We should help them fix this!
A replica of a 1968 game console. Nice finish!
1981 all over again.
You can't beat the aesthetic of early computers.
IBM's monolithic SAGE radar system.
Gubbins.
Programming then was not like programming today.
An electronic analog computer, the Nordsieck Differential Analyzer (1950).
April 24, 2017 / Matt Hall/ 3 Comments
history, computing, geeky, travel
SEG-Y Rev 2 again: little-endian is legal!
March 31, 2017 / Matt Hall
Big news! Little-endian byte order is finally legal in SEG-Y files.
That's not all. I already spilled the beans on 64-bit floats. You can now have up to 18 quintillion traces (18 exatraces?) in a seismic line. And, finally, the hyphen confusion is cleared up: it's 'SEG-Y', with a hyphen. All this is spelled out in the new SEG-Y specification, Revision 2.0, which was officially released yesterday after at least five years in the making. Congratulations to Jill Lewis, Rune Hagelund, Stewart Levin, and the rest of the SEG Technical Standards Committee.
Back up a sec: what's an endian?
Whenever you have to think about the order of bytes (the 8-bit chunks in a 'word' of 32 bits, for example) — for instance when you send data down a wire, or store bytes in memory, or in a file on disk — you have to decide if you're Roman Catholic or Church of England.
It's not really about religion. It's about eggs.
In one of the more obscure satirical analogies in English literature, Jonathan Swift wrote about the ideological tussle between two factions of Lilliputians in Gulliver's Travels (1726). The Big-Endians liked to break their eggs at the big end, while the Little-Endians preferred the pointier option. Chaos ensued.
Two hundred and fifty years later, Danny Cohen borrowed the terminology in his 1 April 1980 paper, On Holy Wars and a Plea for Peace — in which he positioned the Big-Endians, preferring to store the big bytes first in memory, against the Little-Endians, who naturally prefer to store the little ones first. Big bytes first is how the Internet shuttles data around, so big-endian is sometimes called network byte order. The drawing (right) shows how the 4 bytes in a 32-bit 'word' (the hexadecimal codes 0A, 0B, 0C and 0D) sit in memory.
Because we write ordinary numbers big-endian style — 2017 has the thousands first, the units last — big-endian might seem intuitive. Then again, lots of people write dates as, say, 31-03-2017, which is analogous to little-endian order. Cohen reviews the computational considerations in his paper, but really these are just conventions. Some architectures pick one, some pick the other. It just happens that the x86 architecture that powers most desktop and laptop computers is little-endian, so people have been illegally (and often accidentally) writing little-endian SEG-Y files for ages. Now it's actually allowed.
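Python's struct module makes the two orders concrete. Here's a minimal sketch using the 0A 0B 0C 0D word from the drawing:

```python
import struct

word = 0x0A0B0C0D

big = struct.pack(">I", word)     # big-endian, aka 'network byte order'
little = struct.pack("<I", word)  # little-endian, x86's native order

assert big == b"\x0a\x0b\x0c\x0d"      # big bytes first
assert little == b"\x0d\x0c\x0b\x0a"   # little bytes first
```

The same 32-bit value, two different byte sequences on disk — which is exactly why the format flag matters.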
Still other byte orders are possible. Some processors, notably ARM and other RISC architectures, are middle-endian (aka mixed endian or bi-endian). You can think of this as analogous to the month-first American date format: 03-31-2017. For example, the two halves of a 32-bit word might be reversed compared to their 'pure' endian order. I guess this is like breaking your boiled egg in the middle. Swift did not tell us which religious denomination these hapless folks subscribe to.
OK, that's enough about byte order
I agree. So I'll end with this handy SEG-Y cheatsheet. Click here for the PDF.
References and acknowledgments
Cohen, Danny (April 1, 1980). On Holy Wars and a Plea for Peace. IETF. IEN 137. "...which bit should travel first, the bit from the little end of the word, or the bit from the big end of the word? The followers of the former approach are called the Little-Endians, and the followers of the latter are called the Big-Endians." Also published at IEEE Computer, October 1981 issue.
Thumbnail image: "Remember, people will judge you by your actions, not your intentions. You may have a heart of gold -- but so does a hard-boiled egg." by Kate Ter Haar is licensed under CC BY 2.0
March 31, 2017 / Matt Hall/ 9 Comments
seismic, formats, standards, computing
More precise SEG-Y?
The impending SEG-Y Revision 2 release allows the use of double-precision floating point numbers. This news might leave some people thinking: "What?".
Integers and floats
In most computing environments, there are various kinds of number. The main two are integers and floating point numbers. Let's take a quick look at integers, or ints, first.
Integers can only represent round numbers: 0, 1, 2, 3, etc. They come in two main flavours — signed and unsigned — and various bit-depths, e.g. 8-bit, 16-bit, and so on. An 8-bit unsigned integer can have values between 0 and 255; signed ints go from -128 to +127 using a representation called two's complement.
As you might guess, floating point numbers, or floats, are used to represent all the other numbers — you know, numbers like 4.1 and –7.2346312 × 10¹³ — we need lots of those.
Floats in binary
OK, so we need to know about floats. To understand what double-precision means, we need to know how floats are represented in computers. In other words, how on earth can a binary number like 01000010011011001010110100010101 represent a floating point number?
It's fairly easy to understand how integers are stored in binary: the 8-bit binary number 01001101 is the integer 77 in decimal, or 4D in hexadecimal; 11111111 is 255 (base 10) or FF (base 16) if we're dealing with unsigned ints, or -1 decimal if we're in the two's complement realm of signed ints.
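A quick check of those claims in Python — the same bit patterns in three notations, plus a tiny two's complement reinterpretation:

```python
def to_signed8(u):
    """Reinterpret an 8-bit unsigned value as two's-complement signed."""
    return u - 256 if u >= 0x80 else u

assert 0b01001101 == 77 == 0x4D       # same number in binary, decimal, hex
assert 0b11111111 == 255 == 0xFF      # as an unsigned int
assert to_signed8(0b11111111) == -1   # as a signed (two's complement) int
assert to_signed8(0b10000000) == -128
```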
Clearly we can only represent a certain number of values with, say, 16 bits. This would give us 65 536 integers... but that's not enough dynamic range to represent tiny or gigantic floats, not if we want any precision at all. So we have a bit of a paradox: we'd like to represent a huge range of numbers (down around the mass of an electron, say, and up to Avogadro's number), but with reasonably high precision, at least a few significant figures. This is where floating point representations come in.
Scientific notation, sort of
If you're thinking about scientific notation, you're thinking on the right lines. We raise some base (say, 10) to some integer exponent, and multiply by another integer (called the mantissa, or significand). That way, we can write a huge range of numbers with plenty of precision, using only two integers. So:
$$ 3.14159 = 314159 \times 10^{-5} \ \ \mathrm{and} \ \ 6.02214 \times 10^{23} = 602214 \times 10^{18} $$
If I have two bytes at my disposal (so-called 'half precision'), I could have an 8-bit int for the integer part, called the significand, and another 8-bit int for the exponent. Then we could have floats from \(0\) up to \(255 \times 10^{255}\). The range is pretty good, but clearly I need a way to get negative significands — maybe I could use one bit for the sign, and leave 7 bits for the exponent. I also need a way to get negative exponents — I could assign a bias of –64 to the exponent, so that 127 becomes 63 and an exponent of 0 becomes –64. More bits would help, and there are other ways to apportion the bits, and we can use tricks like assuming that the significand starts with a 1, storing only the fractional part and thereby saving a bit. Every bit counts!
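As a sketch — this exact layout is my own toy format, not any standard — the decoding rule described above might look like:

```python
def decode_toy16(sign_bit, exponent, significand):
    """Toy 2-byte float: 1 sign bit, 7-bit exponent (bias 64), 8-bit significand.

    value = ±significand × 10^(exponent − 64)
    """
    sign = -1 if sign_bit else 1
    return sign * significand * 10.0 ** (exponent - 64)
```

With this layout, decode_toy16(0, 64, 255) gives 255.0, and decode_toy16(1, 62, 42) gives roughly −0.42 — a wide range from just 16 bits, at the cost of precision.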
IBM vs IEEE floats
The IBM float and IEEE 754-2008 specifications are just different ways of splitting up the bits in a floating point representation. Single-precision (32-bit) IBM floats differ from single-precision IEEE floats in two ways: they use 7 bits and a base of 16 for the exponent. In contrast, IEEE floats — which are used by essentially all modern computers — use 8 bits and base 2 (usually) for the exponent. The IEEE standard also defines not-a-numbers (NaNs), and positive and negative infinities, among other conveniences for computing.
In double-precision, IBM floats get 56 bits for the fraction of the significand, allowing greater precision. There are still only 7 bits for the exponent, so there's no change in the dynamic range. 64-bit IEEE floats, however, use 11 bits for the exponent, leaving 52 bits for the fraction of the significand. This scheme results in 15–17 significant figures in decimal numbers.
The diagram below shows how four bytes (0x42, 0x6C, 0xAD, 0x15) are interpreted under the two schemes. The results are quite different. Notice the extra bit for the exponent in the IEEE representation, and the different bases and biases.
A four-byte word, 426CAD15 (in hexadecimal), interpreted as an IBM float (top) and an IEEE float (bottom). Most scientists would care about this difference!
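Here's a sketch of decoding those same four bytes both ways; the IBM decoder is hand-rolled (values rounded in the comment):

```python
import struct

def ibm32(b):
    """Decode a big-endian IBM System/360 single-precision float."""
    (u,) = struct.unpack(">I", b)
    sign = -1.0 if u >> 31 else 1.0
    exponent = (u >> 24) & 0x7F              # 7 bits, bias 64, base 16
    fraction = (u & 0x00FFFFFF) / (1 << 24)  # 24-bit fraction
    return sign * fraction * 16.0 ** (exponent - 64)

word = bytes([0x42, 0x6C, 0xAD, 0x15])
(ieee,) = struct.unpack(">f", word)  # IEEE 754: 8-bit exponent, bias 127, base 2
ibm = ibm32(word)
# ibm ≈ 108.68, ieee ≈ 59.17 — the same bytes, very different numbers
```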
IBM, IEEE, and seismic
When SEG-Y was defined in 1975, there were only IBM floats — IEEE floats were not defined until 1985. The SEG allowed the use of IEEE floating-point numbers in Revision 1 (2002), and they are still allowed in the impending Revision 2 specification. This is important because most computers these days use IEEE float representations, so if you want to read or write IBM floats, you're going to need to do some work.
The floating-point format in a particular SEG-Y file should be indicated by a flag in bytes 3225–3226. A value of 0x01 indicates IBM floats, while 0x05 indicates IEEE floats. Then again, you can't believe everything you read in headers. And, unfortunately, you can't tell an IBM float just by looking at it. Meisinger (2004) wrote a nice article in CSEG Recorder about the perils of loading IBM as IEEE and vice versa — illustrated below. You should read it.
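A minimal sketch of checking that flag, given the 400-byte binary header that follows the 3200-byte textual header (so bytes 3225–3226 of the file are bytes 25–26 of the binary header; the function and dict names are mine):

```python
import struct

FORMATS = {1: "IBM float", 2: "int32", 3: "int16", 5: "IEEE float", 8: "int8"}

def sample_format(binary_header):
    """Read the sample-format code from a SEG-Y binary header.

    The code lives in bytes 3225-3226 of the file (1-indexed),
    i.e. at offset 24 within the 400-byte binary header.
    """
    (code,) = struct.unpack(">H", binary_header[24:26])
    return FORMATS.get(code, "unknown ({})".format(code))
```

Of course, as noted above, the header can lie — treat the result as a hint, not a guarantee.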
From Meisinger, D (2004). SEGY floating point confusion. CSEG Recorder 29(7). Available online.
I wrote this post by accident while writing about endianness, the main big change in the new SEG-Y revision. Stay tuned for that post! [Update: here it is!]
seismic, computing, formats
The quick green forsterite jumped over the lazy dolomite
The best-known pangram — a sentence containing every letter of the alphabet — is probably
"The quick brown fox jumped over the lazy dog."
There are lots of others of course. If you write like James Joyce, there are probably an infinite number of others. The point is to be short, and one of the shortest, with only 29 letters (!), even has a geological flavour:
"Sphinx of black quartz, judge my vow."
I know what you're thinking: Cool, but what's the shortest set of mineral names that uses all the letters of the alphabet? What logophiliac geologist would not wonder the same thing?
Well, we posed this question in the most recent "Riddle me this" segment on the Undersampled Radio podcast. This blog post is my solution.
The set cover problem
Finding pangrams in a list of words amounts to solving the classical set cover problem:
"Given a set of elements \(\{U_1, U_2,\ldots , U_n\}\) (called the 'universe') and a collection \(S\) of \(m\) sets whose union equals the universe, the set cover problem is to identify the smallest sub-collection of \(S\) whose union equals (or 'covers') the universe."
Our universe is the alphabet, and our \(S\) is the list of \(m\) mineral names. There is a slight twist in our case: the set cover problem wants the smallest subset of \(S\) — the fewest members. But in this problem, I suspect there are several 4-word solutions (judging from my experiments), so I want the smallest total size of the members of the subset. That is, I want the fewest total letters in the solution.
The set cover problem was shown to be NP-complete in 1972. What does this mean? It means that it's easy to tell if you have an answer (do you have all the letters of the alphabet?), but the only way to arrive at a solution is — to oversimplify massively — by brute force. (If you're interested in this stuff, this edition of the BBC's In Our Time is one of the best intros to P vs NP and complexity theory that I know of.)
Anyway, the point is that if we find a better way than brute force to solve this problem, then we need to write a paper about it immediately, claim our prize, collect our turkey, then move to a sunny tax haven with good water and double-digit elevation.
So, this could take a while: there are over 95 billion ways to draw 3 words from my list of 4600 mineral names. If we need 4 minerals, there are 400 trillion combinations... and a quick calculation suggests that my laptop will take a little over 50 years to check all the combinations.
Can't we speed it up a bit?
Brute force is one thing, but we don't need to be brutish about it. Maybe we can think of some strategies to give ourselves a decent chance:
The list is alphabetically sorted, so randomize the list before searching. (I did this.)
Guess some 'useful' minerals and ensure that you get to them. (I did this too, with quartz.)
Check there are at least 26 letters in the candidate words, and (if it's only records we care about) no more than 44, because I have a solution with 45 letters (see below).
We could sort the list into word length order. That way we search shorter things first, so we should get shorter lists (which we want) earlier.
My solution does not depend much on Python's set type. Maybe we could do more with set theory.
Before inspecting the last word in each list, we could make sure it contains at least one letter that's so far missing.
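For the record, here's a brute-force sketch of the search itself (the word list passed in is a stand-in — the real mineral list has about 4600 names):

```python
from itertools import combinations

ALPHABET = frozenset("abcdefghijklmnopqrstuvwxyz")

def pangram_cover(words, k):
    """Return the first k-word subset whose letters cover the alphabet, or None."""
    letters = {w: frozenset(w.lower()) & ALPHABET for w in words}
    for combo in combinations(words, k):
        covered = frozenset()
        for w in combo:
            covered |= letters[w]
        if covered == ALPHABET:
            return combo
    return None
```

For example, passing in the four minerals from the 45-letter solution below with k=4 returns them; two minerals alone can't cover 26 letters and return None.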
So far, the best solution I've come up with has 45 letters, so there's plenty of room for improvement:
'quartz', 'kvanefjeldite', 'abswurmbachite', 'pyroxmangite'
My solution is in this Jupyter Notebook. Please put me out of my misery by improving on it.
March 16, 2017 / Matt Hall/ Comment
computing, mathematics, words, puzzles, podcast
SEG machine learning contest: there's still time
December 13, 2016 / Matt Hall
Have you been looking for an excuse to find out what machine learning is all about? Or maybe learn a bit of Python programming language? If so, you need to check out Brendon Hall's tutorial in the October issue of The Leading Edge. Entitled, "Facies classification using machine learning", it's a walk-through of a basic statistical learning workflow, applied to a small dataset from the Hugoton gas field in Kansas, USA.
But it was also the launch of a strictly fun contest to see who can get the best prediction from the available data. The rules are spelled out in the contest's README, but in a nutshell, you can use any reproducible workflow you like in Python, R, Julia or Lua, and you must disclose the complete workflow. The idea is that contestants can learn from each other.
Left: crossplots and histograms of wireline log data, coloured by facies — the idea is to highlight possible data issues, such as highly correlated features. Right: true facies (left) and predicted facies (right) in a validation plot. See the rest of the paper for details.
The task at hand is to predict sedimentological facies from well logs. Such log-derived facies are sometimes called e-facies. This is a familiar task to many development geoscientists, and there are many, many ways to go about it. In the article, Brendon trains a support vector machine to discriminate between facies. It does a fair job, but the accuracy of the result is less than 50%. The challenge of the contest is to do better.
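To show the shape of the task — log measurements in, facies labels out — here's a sketch with invented numbers and a 1-nearest-neighbour classifier standing in for the SVM (the real contest uses several wireline logs and nine facies classes):

```python
import math

# Toy training data: (gamma-ray, neutron porosity) -> facies. Values invented.
TRAIN = [
    ((20.0, 0.28), "sandstone"),
    ((30.0, 0.24), "sandstone"),
    ((95.0, 0.08), "shale"),
    ((110.0, 0.05), "shale"),
]

def predict(sample):
    """Predict facies by the nearest training sample in log space."""
    nearest = min(TRAIN, key=lambda t: math.dist(t[0], sample))
    return nearest[1]
```

A clean sample near the sandstone cluster, like (25.0, 0.26), comes back as "sandstone"; the real problem is hard precisely because the facies clusters overlap heavily.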
Indeed, people have already done better; here are the current standings:
1 gccrowther 0.580 Random forest Python Notebook
2 LA_Team 0.568 DNN Python Notebook
3 gganssle 0.561 DNN Lua Notebook
4 MandMs 0.552 SVM Python Notebook
5 thanish 0.551 Random forest R Notebook
6 geoLEARN 0.530 Random forest Python Notebook
7 CannedGeo 0.512 SVM Python Notebook
8 BrendonHall 0.412 SVM Python Initial score in article
As you can see, DNNs (deep neural networks) are, in keeping with the amazing recent advances in the problem-solving capability of this technology, doing very well on this task. Of the 'shallow' methods, random forests are quite prominent, and indeed are a great first-stop for classification problems as they tend to do quite well with little tuning.
There are still over 6 weeks to enter: you have until 31 January. There is a little overhead — you need to learn a bit about git and GitHub, there's some programming, and of course machine learning is a massive field to get up to speed on — but don't be discouraged. The very first entry was from Bryan Page, a self-described non-programmer who dusted off some basic skills to improve on Brendon's notebook. But you can run the notebook right here in mybinder.org (if it's up today — it's been a bit flaky lately) and play around with a few parameters yourself.
The contest aspect is definitely low-key. There's no money on the line — just a goody bag of fun prizes and a shedload of kudos that will surely get the winners into some awesome geophysics parties. My hope is that it will encourage you (yes, you) to have fun playing with data and code, trying to do that magical thing: predict geology from geophysical data.
Hall, B (2016). Facies classification using machine learning. The Leading Edge 35 (10), 906–909. doi: 10.1190/tle35100906.1. (This paper is open access: you don't have to be an SEG member to read it.)
December 13, 2016 / Matt Hall/ 8 Comments
Science, Event, News
machine learning, computing, statistics, geocomputing, openness
Joelma Azevedo 1, , Juan Carlos Pozo 2, and Arlúcio Viana 3,,
Departamento de Matemática, Universidade de Pernambuco, Nazaré da Mata, Brazil
Departamento de Matemáticas, Facultad de Ciencias, Universidad de Chile, Santiago, Chile
Departamento de Matemática, Universidade Federal de Sergipe, São Cristóvão, Brazil
* Corresponding author: Arlúcio Viana
Received December 2020 Revised April 2021 Early access May 2021
This paper is devoted to the study of the global well-posedness for a non-local-in-time Navier-Stokes equation. Our results recover in particular other existing well-posedness results for the Navier-Stokes equations and their time-fractional version. We show the appropriate manner to apply Kato's strategy in this context, with initial conditions in the divergence-free Lebesgue space $ L^\sigma_d(\mathbb{R}^d) $. Temporal decay at $ 0 $ and $ \infty $ are obtained for the solution and its gradient.
Keywords: Nonlocal Navier-Stokes, PDEs in connection with fluid mechanics, well-posedness, long-time behavior, uniqueness.
Mathematics Subject Classification: Primary: 35R09, 35R11, 35Q35, 35Q30; Secondary: 33E12, 76D03, 35B30, 35B40.
Citation: Joelma Azevedo, Juan Carlos Pozo, Arlúcio Viana. Global solutions to the non-local Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021146
H. Amann, Existence and regularity for semilinear parabolic evolution equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 11 (1984), 593-676. Google Scholar
G. Amendola, M. Fabrizio and J. M. Golden, Thermodynamics of Materials with Memory, Springer, New York, 2012. doi: 10.1007/978-1-4614-1692-0. Google Scholar
V. Barbu and S. S. Sritharan, Navier-Stokes equation with hereditary viscosity, Z. Angew. Math. Phys., 54 (2003), 449-461. doi: 10.1007/s00033-003-1087-y. Google Scholar
M. Cannone, A generalization of a theorem by Kato on Navier-Stokes equations, Rev. Mat. Iberoamericana, 13 (1997), 515-541. doi: 10.4171/RMI/229. Google Scholar
R. Carlone, A. Fiorenza and L. Tentarelli, The action of Volterra integral operators with highly singular kernels on Hölder continuous, Lebesgue and Sobolev functions, J. Funct. Anal., 273 (2017), 1258-1294. doi: 10.1016/j.jfa.2017.04.013. Google Scholar
Ph. Clément and J. A. Nohel, Abstract linear and nonlinear Volterra equations preserving positivity, SIAM J. Math. Anal., 10 (1979), 365-388. doi: 10.1137/0510035. Google Scholar
Ph. Clément and J. A. Nohel, Asymptotic behavior of solutions of nonlinear Volterra equations with completely positive kernels, SIAM J. Math. Anal., 12 (1981), 514-535. doi: 10.1137/0512045. Google Scholar
P. M. de Carvalho-Neto and G. Planas, Mild solutions to the time fractional Navier-Stokes equations in $\Bbb{R}^N$, J. Differential Equations, 259 (2015), 2948-2980. doi: 10.1016/j.jde.2015.04.008. Google Scholar
Z. Z. Ganji, D. D. Ganji, D. Ammar and M. Rostamian, Analytical solution of time-fractional Navier-Stokes equation in polar coordinate by homotopy perturbation method, Numer. Methods Partial Differential Equations, 26 (2010), 117-124. doi: 10.1002/num.20420. Google Scholar
L. Grafakos, Classical and Modern Fourier Analysis, Pearson Education, Inc., Upper Saddle River, NJ, 2004. Google Scholar
T. Kato, Strong $L^{p}$-solutions of the Navier-Stokes equation in $\Bbb{R}^{m}$, with applications to weak solutions, Math. Z., 187 (1984), 471-480. doi: 10.1007/BF01174182. Google Scholar
J. Kemppainen, J. Siljander, V. Vergara and R. Zacher, Decay estimates for time-fractional and other non-local in time subdiffusion equations in $\Bbb{R}^d$, Math. Ann., 366 (2016), 941-979. doi: 10.1007/s00208-015-1356-z. Google Scholar
A. N. Kochubei, Distributed order calculus and equations of ultraslow diffusion, J. Math. Anal. Appl., 340 (2008), 252-281. doi: 10.1016/j.jmaa.2007.08.024. Google Scholar
T. Kodama and T. Koide, Memory effects and transport coefficients for non-Newtonian fluids, J. Phys. G: Nucl. Part. Phys., 36 (2009), 6 pp. doi: 10.1088/0954-3899/36/6/064063. Google Scholar
Q. Li, Y. Chen, Y. Huang and Y. Wang, Two-grid methods for semilinear time fractional reaction diffusion equations by expanded mixed finite element method, Appl. Numer. Math., 157 (2020), 38-54. doi: 10.1016/j.apnum.2020.05.024. Google Scholar
R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: A fractional dynamics approach, Phys. Rep., 339 (2000), 77 pp. doi: 10.1016/S0370-1573(00)00070-3. Google Scholar
S. Momani and Z. Odibat, Analytical solution of a time-fractional Navier-Stokes equation by Adomian decomposition method, Appl. Math. Comput., 177 (2006), 488-494. doi: 10.1016/j.amc.2005.11.025. Google Scholar
L. Peng, Y. Zhou, B. Ahmad and A. Alsaedi, The Cauchy problem for fractional Navier-Stokes equations in Sobolev spaces, Chaos, Solitons Fractals, 102 (2017), 218-228. doi: 10.1016/j.chaos.2017.02.011. Google Scholar
J. C. Pozo and V. Vergara, Fundamental solutions and decay of fully non-local problems, Discrete Contin. Dyn. Syst., 39 (2019), 639-666. doi: 10.3934/dcds.2019026. Google Scholar
J. Prüss, Evolutionary Integral Equations and Applications, Monographs in Mathematics, Vol. 87, Birkhäuser, Verlag, Basel, 1993. doi: 10.1007/978-3-0348-8570-6. Google Scholar
Y. Wang and T. Liang, Mild solutions to the time fractional Navier-Stokes delay differential inclusions, Discrete Contin. Dyn. Syst. Ser. B, 24 (2019), 3713-3740. doi: 10.3934/dcdsb.2018312. Google Scholar
L. Xu, T. Shen, X. Yang and J. Liang, Analysis of time fractional and space nonlocal stochastic incompressible Navier-Stokes equation driven by white noise, Comput. Math. Appl., 78 (2019), 1669-1680. doi: 10.1016/j.camwa.2018.12.022. Google Scholar
J. Xu, Z. Zhang and T. Caraballo, Mild solutions to time fractional stochastic 2d-stokes equations with bounded and unbounded delay, J. Dyn. Diff. Equat., (2019). doi: 10.1007/s10884-019-09809-3. Google Scholar
P. Xu, C. Zeng and J. Huang, Well-posedness of the time-space fractional stochastic Navier-Stokes equations driven by fractional Brownian motion, Math. Model. Nat. Phenom., 13 (2018), Paper No. 11, 18 pp. doi: 10.1051/mmnp/2018003. Google Scholar
J. Zhang and J. Wang, Numerical analysis for Navier-Stokes equations with time fractional derivatives, Appl. Math. Comput., 336 (2018), 481-489. doi: 10.1016/j.amc.2018.04.036. Google Scholar
R. Zheng and X. Jiang, Spectral methods for the time-fractional Navier-Stokes equation, Appl. Math. Lett., 91 (2019), 194-200. doi: 10.1016/j.aml.2018.12.018. Google Scholar
Y. Zhou and L. Peng, On the time-fractional Navier-Stokes equations, Comput. Math. Appl., 73 (2017), 874-891. doi: 10.1016/j.camwa.2016.03.026. Google Scholar
Y. Zhou and L. Peng, Weak solutions of the time-fractional Navier-Stokes equations and optimal control, Comput. Math. Appl., 73 (2017), 1016-1027. doi: 10.1016/j.camwa.2016.07.007. Google Scholar
Y. Zhou, L. Peng and Y. Huang, Existence and Hölder continuity of solutions for time-fractional Navier-Stokes equations, Math. Methods Appl. Sci., 41 (2018), 7830-7838. doi: 10.1002/mma.5245. Google Scholar
L. Peng, A. Debbouche and Y. Zhou, Existence and approximation of solutions for time-fractional Navier-Stokes equations, Math. Methods Appl. Sci., 41 (2018), 8973-8984. doi: 10.1002/mma.4779. Google Scholar
Y. Zhou, L. Peng, B. Ahmad and A. Alsaedi, Energy methods for fractional Navier-Stokes equations, Chaos, Solitons Fractals, 102 (2017), 78-85. doi: 10.1016/j.chaos.2017.03.053. Google Scholar
G. Zou, G. Lv and J.-L. Wu, Stochastic Navier-Stokes equations with Caputo derivative driven by fractional noises, J. Math. Anal. Appl., 461 (2018), 595-609. doi: 10.1016/j.jmaa.2018.01.027. Google Scholar
XOR - Is it possible to get a, b, c from a⊕b, b⊕c, a⊕c?
Just got a simple question from a friend; still thinking. Let's share it!
Is it possible to get a, b, c when you have a⊕b, b⊕c, a⊕c ?
where ⊕ is the boolean XOR (exclusive OR) operator.
a,b,c are any boolean numbers with the same length.
For the sake of simplicity, consider a,b,c as bytes (length is 8 bits).
Edit: You can use other operators too.
Edit 2: Though it's preferable to use only ⊕ (if possible) (removed, use any operator)
Note 1: Answer without explanations (e.g. yes/no) are not allowed.
Note 2: This is a puzzling programming, math-related question and could be placed on Puzzling.SE, Programming.SE, StackOverflow and Math.SE. So excuse me or migrate the question if I pasted it on the wrong SE forum.
mathematics computer-puzzle arithmetic computer-science
$\begingroup$ Can we use operators that aren't ⊕? $\endgroup$ – Ian MacDonald Sep 2 '15 at 13:51
$\begingroup$ Are a, b, c integers or booleans? $\endgroup$ – Ian MacDonald Sep 2 '15 at 13:55
$\begingroup$ One could also notice that if x=a⊕b, y=b⊕c, z=a⊕c, then z=x⊕y. Thus, in terms of information, the three values x, y, z contain only 2 bits of information and thus cannot determine 3 independent values. In terms of algebra, we could say that the set of three equations are linearly dependent and define a plane in the Boolean space Oabc instead of a point. $\endgroup$ – ach Sep 2 '15 at 17:50
$\begingroup$ @AndreyChernyakhovskiy in terms of my answer, this shows that the 3x3 system is not full-rank (mod 2). $\endgroup$ – asmeurer Sep 2 '15 at 20:29
$\begingroup$ @Spook: There's nothing in the definition of an operator that means it must cause data loss... for example, you could define an operator ⊞ on numbers considered as bit strings, where abc⊞def=abcdef (ie concatenation) - clearly this doesn't lose information; furthermore, if you're given a+b, b+c, c+a (i.e. normal addition) then you can recover a,b & c. Since XOR is equivalent to addition (mod 2), it's not unreasonable to assume the same would be true there - it just happens, as the answers show, to be false :) $\endgroup$ – psmears Sep 9 '15 at 6:59
If you're given just a⊕b and b⊕c, then you can calculate
(a⊕b) ⊕ (b⊕c)
= a ⊕ (b ⊕ b) ⊕ c (since ⊕ is associative)
= a ⊕ 0 ⊕ c (since X⊕X=0 for any X)
= a ⊕ c (since X⊕0=X for any X)
so in effect when you're given a⊕b, b⊕c, a⊕c you've only been given two numbers (because the last one is redundant). So (assuming a,b,c are 8 bits as in the question) you only have 16 bits of information, and hence you can't work out all of a,b and c since that would require 24 bits of information.
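This counting argument can be checked directly. A small Python sketch of mine (not part of the original answer): for random byte values, every one of the 256 guesses for a yields a triple consistent with the three given XORs.

```python
import random

random.seed(1)
a, b, c = (random.randrange(256) for _ in range(3))
ab, bc, ac = a ^ b, b ^ c, a ^ c

# The third value is redundant: (a^b) ^ (b^c) == a^c.
assert ab ^ bc == ac

# Every guess for a yields a consistent (a, b, c), so 256 triples fit.
solutions = []
for a_guess in range(256):
    b_guess = a_guess ^ ab          # forced by a^b
    c_guess = b_guess ^ bc          # forced by b^c
    if a_guess ^ c_guess == ac:     # always holds, by the identity above
        solutions.append((a_guess, b_guess, c_guess))

print(len(solutions))  # 256
assert (a, b, c) in solutions
```

This is exactly the "exactly 256 for each" count mentioned in the comments below.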
psmears
@Jet: It's impossible because any valid solution can have any specific bit inverted in all three to become another valid solution. – Deusovi♦ Sep 2 '15 at 15:53
@Jet: It's a very similar concept, yes. Basically there are 2^24=16777216 possibilities for what a,b,c can be, but only 2^16=65536 possibilities for what you'll get from a⊕b, b⊕c, a⊕c (since any one of those values is uniquely determined by the other two). So there must be combinations of a⊕b, b⊕c, a⊕c for which there are multiple solutions for a,b,c - and in fact there are exactly 256 for each. – psmears Sep 2 '15 at 20:01
@BolucPapuccuoglu: Whatever X you take, (X⊕0) is equal to X - because 0⊕0=0 and 1⊕0=1 - i.e. 0 is the identity for XOR. – psmears Sep 3 '15 at 9:35
+1 for working out how much information you're given. I quickly ran into a⊕b ⊕ b⊕c just giving me a⊕c which I already had, but didn't clue in to the fact that this meant there wasn't enough information to give a unique solution. – Peter Cordes Sep 3 '15 at 19:42
@miracle173: Of course you need associativity, that's why I mentioned it right there by the calculation :) I don't think it's really necessary to prove the well-known result that exclusive-or is associative here, just as most people don't prove the associativity of addition every time they use it; anyone needing that proof would be better served by another question. – psmears Sep 7 '15 at 7:20
This is not possible.
Consider the two cases where a, b and c are all true or all false. Now in both cases we have
a⊕b = b⊕c = a⊕c = false
And more generally, $(¬a)⊕(¬b)=a⊕b$, so if $a,b,c$ is a solution, then so is $¬a,¬b,¬c$. (from Klaus Draeger)
GOTO 0
Something something DeMorgan something... it's been a long time! – corsiKa Sep 2 '15 at 16:54
For mathy types: this function is not injective, so it is not invertible. – imallett Sep 2 '15 at 22:45
@KlausDraeger: Assuming 8 bits, ¬a is just a ⊕ 255. This points at the even more general observation that (a⊕x), (b⊕x), (c⊕x) is also a solution, for any x. – MSalters Sep 3 '15 at 9:34
Your question with the XOR operator mathematically reduces to:
In the $\mathbb{F}_{2^m}$ field (field of characteristic 2 whose elements can be represented as sequences of $m$ bits, and where addition is bitwise XOR), is the following matrix $$ M = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} $$ invertible? Indeed, for a dimension-3 vector $V = (x, y, z)$, multiplying $V$ by the matrix above yields a vector $M·V = (x⊕y, x⊕z, y⊕z)$. For the operation to be reversible, the matrix $M$ must be invertible.
But that matrix is not invertible, because the third row is equal to the XOR of the first two rows. The conclusion is thus that it is not possible, in all generality, to recover $a$, $b$ and $c$ from $a⊕b$, $a⊕c$ and $b⊕c$.
We can add that the matrix, being non-invertible, has a non-trivial kernel: there is a subspace $K$ of $(\mathbb{F}_{2^m})^3$ that contains all vectors $V$ such that $M·V = (0, 0, 0)$. The rank of $M$ is 2, meaning that the subspace $K$ has dimension 1; it is not hard to see that $K$ consists of exactly the vectors $(x, x, x)$ for all possible values of $x$. When you have a solution $S = (a, b, c)$ for your equation (the values $a$, $b$ and $c$ match the known values of $a⊕b$, $a⊕c$ and $b⊕c$), then the set of solutions is exactly $\left\{ S+V \mid V \in K \right\}$. In other words, when you have a solution, you have exactly $2^m$ solutions, and nothing to distinguish between them.
We can thus conclude that it is never possible to unambiguously reconstruct $a$, $b$ and $c$ from $a⊕b$, $a⊕c$ and $b⊕c$.
The same result extends to more than three values: you cannot unambiguously recover the $n$ values $a_0, a_1,… a_{n-1}$ from the set of $n$ values $a_i⊕a_{i+1}$ for all $i$ from $0$ to $n-1$ (taking $a_n = a_0$).
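The rank claim behind this argument can be verified mechanically. Below is a small Gaussian-elimination-over-GF(2) sketch of mine (the function name and bitmask representation are not from the answer):

```python
def rank_gf2(rows, ncols):
    """Rank over GF(2) of a 0/1 matrix whose rows are bitmask ints
    (bit k of each mask is the entry in column k)."""
    rows = list(rows)
    rank = 0
    for col in range(ncols):
        # Find a pivot row with a 1 in this column.
        pivot = None
        for i in range(rank, len(rows)):
            if rows[i] >> col & 1:
                pivot = i
                break
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # Clear this column in every other row (addition = XOR).
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

# Rows of M: (1,1,0), (1,0,1), (0,1,1).
M = [0b011, 0b101, 0b110]
print(rank_gf2(M, 3))  # 2 -- not full rank, so M is not invertible
```

The identity matrix gives rank 3, so the function does distinguish invertible systems.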
Thomas Pornin
Would the same be true of any additive group of integers congruent modulo an even number, but not for groups of integers congruent modulo odd numbers? – supercat Sep 3 '15 at 15:32
It is so frightening to see that you wrote the only answer explicitly mentioning linear algebra that I decided to register and upvote! – Michael Le Barbier Grünewald Sep 4 '15 at 15:06
-1 too much math for such a simple question. – miracle173 Sep 7 '15 at 5:57
@miracle Proofs by contradiction are usually horribly boring and non-constructive, this question being a nice example of that. Not sure why you have such a strong antipathy against simple math. – Voo Sep 8 '15 at 17:17
No, since,
A ⊕ B = B ⊕ C = A ⊕ C = 0
can be either
A = B = C = 0 or
A = B = C = 1
Rohcana
Tomer W
+1 The other answers explain many interesting related concepts, but as an answer to a puzzle, this is as simple as it gets. – JiK Sep 3 '15 at 13:46
(as an extension of GOTO 0's answer)
This is also not possible if you consider these as binary numbers: you cannot determine the values of a, b, or c. For instance:
a = 0b11001111010
b = 0b11001111101
c = 0b11001111001
All three of the XORs of these binary representations compress the results down to at most three bits (the right-most three). Wherever all three numbers have matching bits, the XOR results are 0, so those bits of the initial state are effectively "lost" and cannot be recovered.
a⊕b = 0b111
b⊕c = 0b100
a⊕c = 0b011
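These particular values can be checked, and the loss of the shared high bits made vivid, in a couple of lines (my sketch, not from the answer):

```python
a = 0b11001111010
b = 0b11001111101
c = 0b11001111001

# Only the differing low bits survive the XORs.
print(bin(a ^ b), bin(b ^ c), bin(a ^ c))  # 0b111 0b100 0b11

# A completely different triple with the same low-bit pattern
# produces identical XORs -- the matching high bits are unrecoverable.
a2, b2, c2 = 0b010, 0b101, 0b001
assert (a2 ^ b2, b2 ^ c2, a2 ^ c2) == (a ^ b, b ^ c, a ^ c)
```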
Ian MacDonald
There are only a few cases, let's perform a quick inspection:
Let $$a⊕b=k_1$$ $$b⊕c=k_2$$ $$c⊕a=k_3$$ , we have
\begin{matrix} a & b & c & &k_1 & k_2 & k_3\\ -&-&-&-&-&-&-&\\ 0 & 0 & 0 & \rightarrow &0 & 0 & 0\\ 0 & 0 & 1 & \rightarrow &0 & 1 & 1\\ 0 & 1 & 0 & \rightarrow &1 & 1 & 0\\ 0 & 1 & 1 & \rightarrow &1 & 0 & 1\\ 1 & 0 & 0 & \rightarrow &1 & 0 & 1\\ 1 & 0 & 1 & \rightarrow &1 & 1 & 0\\ 1 & 1 & 0 & \rightarrow &0 & 1 & 1\\ 1 & 1 & 1 & \rightarrow &0 & 0 & 0 \end{matrix}
Hence only $$(k_1, k_2, k_3) \in \{(000), (011), (110), (101)\}$$ are solvable.
In addition, please note that:
The RHS of the table is symmetric.
All attainable combinations of $(k_1, k_2, k_3)$ have an even number of '1's, since summing the three equations: $$a⊕b=k_1$$ $$b⊕c=k_2$$ $$c⊕a=k_3$$ yields $$ k_1⊕k_2⊕k_3 =a⊕a⊕b⊕b⊕c⊕c=0$$
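The table above can be regenerated mechanically; a small sketch of mine (not part of the answer):

```python
from itertools import product

# Map each (k1, k2, k3) to the list of (a, b, c) triples producing it.
table = {}
for a, b, c in product((0, 1), repeat=3):
    k = (a ^ b, b ^ c, c ^ a)
    table.setdefault(k, []).append((a, b, c))

# Only 4 of the 8 possible (k1, k2, k3) triples occur...
print(sorted(table))  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# ...each produced by exactly two complementary (a, b, c) triples,
# and each occurring triple has an even number of 1s.
assert all(len(v) == 2 for v in table.values())
assert all(k1 ^ k2 ^ k3 == 0 for k1, k2, k3 in table)
```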
autodavid
Note that the second 4 are the same as the first four. That's why there are only 4 solvable cases. But even if there are 4 solvable cases, none of them has only one solution: each of them has two solutions, a,b,c and ¬a,¬b,¬c. So we can't determine a,b,c exactly if we are given k1,k2,k3 in these four cases, and we can say it's impossible to get a,b,c in the other four cases. So anyway, it's not possible to exactly find one a,b,c in all cases. – Jet Sep 3 '15 at 6:55
In my opinion, this is the clearest answer. – Guntram Blohm Sep 4 '15 at 9:20
We can't get the precise values of $a$, $b$ and $c$, but we can determine them up to a constant.
More specifically, given any solution $(a, b, c)$ and any constant $C$, then $(a \oplus C, b \oplus C, c \oplus C)$ is also a solution, because the $C$'s cancel out when we XOR them. Furthermore, we can show that all solutions to the given system of equations can, in fact, be derived from a single solution in this way.
To show this, it's useful to generalize a bit. First of all, we note that the problem with three equations is simply a special case of the same problem with $n$ equations, where we're given $$\begin{aligned} a_2 \oplus a_1 &= b_1 \\ a_3 \oplus a_2 &= b_2 \\ &\ \ \vdots \\ a_n \oplus a_{n-1} &= b_{n-1} \\ a_1 \oplus a_n &= b_n \end{aligned}$$ with known $(b_1, \dotsc, b_n)$, and wish to solve for $(a_1, \dotsc, a_n)$.
Next, it's useful to note that bitwise XOR is equivalent to (vector) subtraction modulo 2 (and also, of course, to vector addition modulo 2, since those are the same thing; but the equivalence to subtraction gives a nicer generalization here). That is, we may generalize the problem to $$\begin{aligned} a_2 - a_1 &= b_1 \\ a_3 - a_2 &= b_2 \\ &\ \ \vdots \\ a_n - a_{n-1} &= b_{n-1} \\ a_1 - a_n &= b_n \end{aligned}$$ where the $a$'s and $b$'s are elements of an algebraic group* and $-$ is the subtraction operation defined by $x - y = x + (-y)$ (where $+$ is the group operation, and $-y$ is the inverse element of $y$). You can easily check that bitstrings indeed satisfy the definition of a group, with XOR as the group operation and each bitstring as its own inverse.
Why is this useful? Well, because as long as we're just doing addition (and subtraction, which is just addition of an inverse element), we can treat any such group elements just as if they were ordinary numbers, because (by definition) they obey the same algebraic rules. In particular, by adding $a_i$ to both sides of the $i$-th equation and simplifying, we can rearrange the equations above into the following equivalent form: $$\begin{aligned} a_2 &= b_1 + a_1 \\ a_3 &= b_2 + a_2 \\ &\ \ \vdots \\ a_n &= b_{n-1} + a_{n-1} \\ a_1 &= b_n + a_n. \end{aligned}$$
From this, we can see that, if we just pick some value for $a_1$, then we can immediately read out the values of $a_2, \dotsc, a_n$ from the first $n-1$ equations, like this: $$\begin{aligned} a_2 &= b_1 + a_1 \\ a_3 &= b_2 + b_1 + a_1 \\ &\ \ \vdots \\ a_{n-1} &= b_{n-2} + \dotsb + b_2 + b_1 + a_1 \\ a_n &= b_{n-1} + b_{n-2} + \dotsb + b_2 + b_1 + a_1 \end{aligned}$$
Thus, for each value of $a_1$, there can be (at most) one solution to these equations.
Whether or not the values so obtained are in fact a solution then depends on the last equation, which needs to yield the original $a_1$ value. However, it turns out that this doesn't depend on which value we pick! In particular, substituting the value of $a_n$ calculated above into the last equation gives $$a_1 = b_n + b_{n-1} + b_{n-2} + \dotsb + b_2 + b_1 + a_1.$$
But now we can simply subtract $a_1$ from both sides to reduce this equation to: $$0 = b_n + b_{n-1} + b_{n-2} + \dotsb + b_2 + b_1.$$
So if this equation, which only contains the $b$ values, holds, then so will the previous one (for any $a_1$!), and so every choice of $a_1$ will yield a solution to the original equation. In fact, all these solutions will be of the form $a_i = a_i^0 + a_1$, where $a_i^0$ is the value of $a_i$ obtained by starting with $a_1 = 0$. And conversely, if the last equation above does not hold, then the original system of equations is inconsistent, and has no solution at all.
*) In fact, I didn't even need to assume that the group is abelian, i.e. that $x + y = y + x$, although for XOR this does certainly hold. In a non-abelian group you may not be able to freely reorder the terms in a sum, which can sometimes get awkward, but I didn't really have much need for that here, so I decided to go ahead and prove a slightly more general result without that assumption.
Ps. If you still remember indefinite integrals from high school math, you may recall that the solution $F$ to an integral equation like $F = \int f(x)\,dx$ is only defined up to a constant: if $F$ is a solution, then so is $F + C$ for any constant $C$. This is the exact same situation, just with discrete pairwise differences instead of differentials (and with XOR instead of ordinary subtraction).
Basically, if we're only given the differences between adjacent elements in a sequence, we cannot uniquely determine the original sequence without some way to fix the starting value. Having the sequence loop around in a circle doesn't change this; it just adds an extra constraint that all the differences must add up to zero for there to be any solution at all.
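The "pick a value for $a_1$, read off the rest" procedure is straightforward to code for the XOR case; a sketch (the helper name is mine, not from the answer):

```python
def solve_cyclic_xor(bs, a1):
    """Given b_i = a_{i+1} ^ a_i (cyclically) and a chosen a_1,
    return the unique consistent [a_1, ..., a_n], or None if the
    consistency condition  b_1 ^ ... ^ b_n == 0  fails."""
    acc = 0
    for b in bs:
        acc ^= b
    if acc != 0:
        return None          # system is inconsistent: no solution at all
    a = [a1]
    for b in bs[:-1]:        # the last equation is then automatic
        a.append(a[-1] ^ b)
    return a

# Differences of the sequence (3, 5, 6): b = (3^5, 5^6, 6^3)
bs = [3 ^ 5, 5 ^ 6, 6 ^ 3]
print(solve_cyclic_xor(bs, 3))    # [3, 5, 6] -- the original sequence
print(solve_cyclic_xor(bs, 10))   # [10, 12, 15] -- equally valid
```

Every starting value gives a solution, exactly as the answer argues; only the "constant of integration" changes.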
Ilmari Karonen
In the case of integration it somehow seems less useless that f(0) could be any value and there is a corresponding solution, than it seems in this case that a could be any value and there is a corresponding solution :-) – Steve Jessop Sep 4 '15 at 1:14
Remember that $\oplus$ is addition mod 2. So we wish to solve the system of equations
$$\begin{equation*} \begin{alignedat}{4} x & = & a & {}+{} & b & & \\ y & = & & & b & {}+{} & c \\ z & = & a & & & {}+{} & c \end{alignedat} \end{equation*}$$
Solving this over the real numbers we get the solution
$$\begin{equation*} \begin{alignedat}{4} a & = & \frac{x}{2} & {}-{} & \frac{y}{2} & {}+{} & \frac{z}{2} \\ b & = & \frac{x}{2} & {}+{} & \frac{y}{2} & {}-{} & \frac{z}{2} \\ c & = & -\frac{x}{2} & {}+{} & \frac{y}{2} & {}+{} & \frac{z}{2} \end{alignedat} \end{equation*}$$
We know by the theory of linear algebra that this solution is unique. However, this solution does not work in $\mathbb{Z}_2$, because we cannot divide by 2 in that field. Hence the system is not invertible.
asmeurer
So because you can't calculate a solution with the method you have tried, there is no solution? That is a strange argumentation. Maybe you use the wrong method! – miracle173 Sep 7 '15 at 5:42
There's a reason I pointed out that the solution is unique. – asmeurer Sep 7 '15 at 8:28
@psmears, @GOTO 0 and @Ian MacDonald have already answered the basic question. This is another extension.
This question is interesting because operators lose information. The question fundamentally asks what is lost and what can be recovered.
It reminds me of a geometry trick:
Take any 4 points $A,B,C,D$ on a plane. Compute the middle points $E,F,G,H$, one between each in turn (the fourth middle point $H$ is between $D$ and $A$). Whatever the configuration of $ABCD$, $EFGH$ is always a parallelogram.
Computing the middle point is similar to addition here. The fact that $EFGH$ has a property that $ABCD$ did not have is a typical sign of information loss.
Back to XOR, note that with $a$, $a⊕b$ and $a⊕c$ you can recover everything.
There exist some applications using XOR with similar properties, see https://en.wikipedia.org/wiki/Fountain_code. The explanation there is a little general, but basically XOR combinations of a set of numbers (a, b and c, and more) are numerous and have the interesting property that if you receive only part of these combinations you can recover everything. This is useful for multicasting a signal to receivers on an unreliable medium.
From Fountain code - Wikipedia:
IETF RFC 5053 specifies in detail a systematic Raptor code, which has been adopted into multiple standards beyond the IETF, such as within the 3GPP MBMS standard for broadcast file delivery and streaming services, the DVB-H IPDC standard for delivering IP services over DVB networks, and DVB-IPTV for delivering commercial TV services over an IP network.
Stéphane Gourichon
Regarding the geometry trick, there's a proof on Midpoints of a quadrilateral form a parallelogram | math for love – Stéphane Gourichon Sep 4 '15 at 8:04
XOR is true if exactly one of the inputs is true. Which means, if you negate both inputs, XOR stays the same. This means you can't distinguish "a⊕b" from "(not a)⊕(not b)".
Say you have an algorithm to calculate a,b,c for given a⊕b, a⊕c and b⊕c. You use it and get a', b', c'. If someone else comes and says "no, the correct solution is actually (not a', not b', not c')", nobody can decide which is correct.
It doesn't matter if a,b,c are single-bit booleans or longer. The argument applies to each bit.
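The negation argument can be checked bit-for-bit in a couple of lines (a sketch of mine, not part of the answer):

```python
a, b, c = 0b1010, 0b0110, 0b0011
mask = 0b1111                      # flip every bit of a 4-bit value
na, nb, nc = a ^ mask, b ^ mask, c ^ mask

# Negating all three inputs leaves every pairwise XOR unchanged,
# so (a, b, c) and (not a, not b, not c) are indistinguishable.
assert (a ^ b, b ^ c, a ^ c) == (na ^ nb, nb ^ nc, na ^ nc)
print("indistinguishable")
```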
Gully
a⊕b ⊕ b⊕c == a⊕c. So, as already noted, you have 2 equations with 3 unknowns, and this is not possible in general.
However, this is a common scenario in cryptography. If b is the ciphertext, and a and c are two plaintexts, you can use a⊕b and b⊕c to get the XOR of two plaintexts, a⊕c. You can then use linguistic analysis, statistical methods, etc., to decipher a, c, and then b. But this relies on additional information about a and c.
jtpereyda
I know there have been enough good answers already, but here is what I personally find to be the simplest:
No. We can easily list all possible ways to xor these three numbers:
a⊕b ⊕ a⊕c = b⊕c
a⊕b ⊕ b⊕c = a⊕c
a⊕c ⊕ b⊕c = a⊕b
x⊕x = 0 for any x
0⊕x = x for any x
This means that any expression involving these numbers and 0 combined with xor will always reduce to one of those numbers again (your three numbers and 0 are closed under xor). As a consequence, nothing else can be the value of any such expression, a, b and c are no exception.
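The closure claim is easy to verify exhaustively; a small sketch of mine, with arbitrary example values:

```python
import random

random.seed(0)
a, b, c = (random.randrange(256) for _ in range(3))

# The given values, together with 0, form a set closed under XOR
# (a Klein four-group when the three XORs are distinct and nonzero).
S = {0, a ^ b, b ^ c, a ^ c}
assert all(x ^ y in S for x in S for y in S)
print(sorted(S))
```

Since a, b and c themselves need not lie in this set, no XOR expression built from the given values can be guaranteed to produce them.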
Johannes Griebler
4.3: Random Variables
Book: Lies, Damned Lies, or Statistics - How to Tell the Truth with Statistics (Poritz)
4: Probability Theory
Contributed by Jonathan A. Poritz
Associate Professor (Mathematics) at Colorado State University – Pueblo
Definition and First Examples
Distributions for Discrete RVs
Expectation for Discrete RVs
Density Functions for Continuous RVs
The Normal Distribution
Suppose we are doing a random experiment and there is some consequence of the result in which we are interested that can be measured by a number. The experiment might be playing a game of chance, where the result is how much you win or lose depending upon the outcome; or the experiment could be which part of the driver's manual you randomly choose to study, and the result how many points you get on the driver's license test you take the next day; or the experiment might be giving a new drug to a random patient in a medical study, and the result would be some medical measurement you make after treatment (blood pressure, white blood cell count, whatever), etc. There is a name for this situation in mathematics:
DEFINITION 4.3.1. A choice of a number for each outcome of a random experiment is called a random variable [RV]. If the values an RV takes can be counted, because they are either finite or countably infinite in number, the RV is called discrete; if, instead, the RV takes on all the values in an interval of real numbers, the RV is called continuous.
We usually use capital letters to denote RVs and the corresponding lowercase letter to indicate a particular numerical value the RV might have, like \(X\) and \(x\).
EXAMPLE 4.3.2. Suppose we play a silly game where you pay me $5 to play, then I flip a fair coin and I give you $10 if the coin comes up heads and $0 if it comes up tails. Then your net winnings, which would be +$5 or -$5 each time you play, are a random variable. Having only two possible values, this RV is certainly discrete.
EXAMPLE 4.3.3. Weather phenomena vary so much, due to such small effects – such as the famous butterfly flapping its wings in the Amazon rain forest causing a hurricane in North America – that they appear to be a random phenomenon. Therefore, observing the temperature at some weather station is a continuous random variable whose value can be any real number in some range like \(-100\) to \(100\) (we're doing science, so we use \({}^\circ C\)).
EXAMPLE 4.3.4. Suppose we look at the "roll two fair dice independently" experiment from Example 4.2.7 and Example 4.1.21, which was based on the probability model in Example 4.1.21 and sample space in Example 4.1.4. Let us consider in this situation the random variable \(X\) whose value for some pair of dice rolls is the sum of the two numbers showing on the dice. So, for example, \(X(11)=2\), \(X(12)=3\), etc.
In fact, let's make a table of all the values of \(X\): \[\begin{aligned} X(11) &= 2\\ X(21) = X(12) &= 3\\ X(31) = X(22) = X(13) &=4\\ X(41) = X(32) = X(23) = X(14) &= 5\\ X(51) = X(42) = X(33) = X(24) = X(15) &= 6\\ X(61) = X(52) = X(43) = X(34) = X(25) = X(16) &= 7\\ X(62) = X(53) = X(44) = X(35) = X(26) &= 8\\ X(63) = X(54) = X(45) = X(36) &= 9\\ X(64) = X(55) = X(46) &= 10\\ X(65) = X(56) &= 11\\ X(66) &= 12\\\end{aligned}\]
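The table above can be generated by brute force; a quick cross-check in Python (my sketch, not part of the text):

```python
from collections import defaultdict

# Enumerate all 36 equally likely outcomes of two fair dice,
# grouping them by the value of X = sum of the faces.
outcomes_by_sum = defaultdict(list)
for d1 in range(1, 7):
    for d2 in range(1, 7):
        outcomes_by_sum[d1 + d2].append(f"{d1}{d2}")

for s in sorted(outcomes_by_sum):
    print(s, outcomes_by_sum[s])

assert sum(len(v) for v in outcomes_by_sum.values()) == 36
assert len(outcomes_by_sum[7]) == 6   # 7 is the most common sum
```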
The first thing we do with a random variable, usually, is talk about the probabilities associated with it.
DEFINITION 4.3.5. Given a discrete RV \(X\), its distribution is a list of all of the values \(X\) takes on, together with the probability of it taking that value.
[Note this is quite similar to Definition 1.3.5 – because it is essentially the same thing.]
EXAMPLE 4.3.6. Let's look at the RV, which we will call \(X\), in the silly betting game of Example 4.3.2. As we noticed when we first defined that game, there are two possible values for this RV, $5 and -$5. We can actually think of "\(X=5\)" as describing an event, consisting of the set of all outcomes of the coin-flipping experiment which give you a net gain of $5. Likewise, "\(X=-5\)" describes the event consisting of the set of all outcomes which give you a net gain of -$5. These events are as follows:
\(\begin{array}{r|l} x & \text{Set of outcomes } o \text{ such that } X(o)=x \\ \hline 5 & \{H\} \\ -5 & \{T\} \end{array}\)
Since it is a fair coin, the probabilities of these events are known (and very simple), so we conclude that the distribution of this RV is the table
\(\begin{array}{r|l} x & P(X=x) \\ \hline 5 & 1/2 \\ -5 & 1/2 \end{array}\)
EXAMPLE 4.3.7. What about the \(X=\text{``{\it sum of the face values}''}\) RV on the "roll two fair dice, independently" random experiment from Example 4.3.4? We have actually already done most of the work, finding out what values the RV can take and which outcomes cause each of those values. To summarize what we found:
\(\begin{array}{r|l} x & \text{Set of outcomes } o \text{ such that } X(o)=x \\ \hline 2 & \{11\} \\ 3 & \{21, 12\} \\ 4 & \{31, 22, 13\} \\ 5 & \{41, 32, 23, 14\} \\ 6 & \{51, 42, 33, 24, 15\} \\ 7 & \{61, 52, 43, 34, 25, 16\} \\ 8 & \{62, 53, 44, 35, 26\} \\ 9 & \{63, 54, 45, 36\} \\ 10 & \{64, 55, 46\} \\ 11 & \{65, 56\} \\ 12 & \{66\} \end{array}\)
But we have seen that this is an equiprobable situation, where the probability of any event \(A\) containing \(n\) outcomes is \(P(A)=n\cdot1/36\), so we can instantly fill in the distribution table for this RV as
\(\begin{array}{r|l} x & P(X=x) \\ \hline 2 & \frac{1}{36} \\ 3 & \frac{2}{36} = \frac{1}{18} \\ 4 & \frac{3}{36} = \frac{1}{12} \\ 5 & \frac{4}{36} = \frac{1}{9} \\ 6 & \frac{5}{36} \\ 7 & \frac{6}{36} = \frac{1}{6} \\ 8 & \frac{5}{36} \\ 9 & \frac{4}{36} = \frac{1}{9} \\ 10 & \frac{3}{36} = \frac{1}{12} \\ 11 & \frac{2}{36} = \frac{1}{18} \\ 12 & \frac{1}{36} \end{array}\)
One thing to notice about distributions is that if we make a preliminary table, as we just did, of the events consisting of all outcomes which give a particular value when plugged into the RV, then we will have a collection of disjoint events which exhausts all of the sample space. What this means is that the sum of the probability values in the distribution table of an RV is the probability of the whole sample space of that RV's experiment. Therefore
FACT 4.3.8. The sum of the probabilities in a distribution table for a random variable must always equal \(1\).
It is quite a good idea, whenever you write down a distribution, to check that this Fact is true in your distribution table, simply as a sanity check against simple arithmetic errors.
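A sanity check of exactly this kind can be automated for the two-dice distribution of Example 4.3.7; a sketch of mine, not part of the text:

```python
from fractions import Fraction
from collections import Counter

# Count how many of the 36 equally likely outcomes give each sum,
# then convert counts to exact probabilities.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
dist = {x: Fraction(n, 36) for x, n in counts.items()}

for x in sorted(dist):
    print(x, dist[x])

# Fact 4.3.8: the probabilities in a distribution table sum to 1.
assert sum(dist.values()) == 1
```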
Since we cannot predict what exactly will be the outcome each time we perform a random experiment, we cannot predict with precision what will be the value of an RV on that experiment, each time. But, as we did with the basic idea of probability, maybe we can at least learn something from the long-term trends. It turns out that it is relatively easy to figure out the mean value of an RV over a large number of runs of the experiment.
Say \(X\) is a discrete RV, for which the distribution tells us that \(X\) takes the values \(x_1, \dots, x_n\), each with corresponding probability \(p_1, \dots, p_n\). Then the frequentist view of probability says that the probability \(p_i\) that \(X=x_i\) is (approximately) \(n_i/N\), where \(n_i\) is the number of times \(X=x_i\) out of a large number \(N\) of runs of the experiment. But if \[p_i = n_i/N\] then, multiplying both sides by \(N\), \[n_i = p_i\,N \ .\] That means that, out of the \(N\) runs of the experiment, \(X\) will have the value \(x_1\) in \(p_1\,N\) runs, the value \(x_2\) in \(p_2\,N\) runs, etc. So the sum of \(X\) over those \(N\) runs will be \[(p_1\,N)x_1+(p_2\,N)x_2 + \dots + (p_n\,N)x_n\ .\] Therefore the mean value of \(X\) over these \(N\) runs will be the total divided by \(N\), which is \(p_1\,x_1 + \dots + p_n x_n\). This motivates the definition
DEFINITION 4.3.9. Given a discrete RV \(X\) which takes on the values \(x_1, \dots, x_n\) with probabilities \(p_1, \dots, p_n\), the expectation [sometimes also called the expected value] of \(X\) is the value \[E(X) = \sum p_i\,x_i\ .\]
By what we saw just before this definition, we have the following
FACT 4.3.10. The expectation of a discrete RV is the mean of its values over many runs of the experiment.
Note: The attentive reader will have noticed that we dealt above only with the case of a finite RV, not the case of a countably infinite one. It turns out that all of the above works quite well in that more complex case as well, so long as one is comfortable with a bit of mathematical technology called "summing an infinite series." We do not assume such a comfort level in our readers at this time, so we shall pass over the details of expectations of infinite, discrete RVs.
EXAMPLE 4.3.11. Let's compute the expectation of the net profit RV \(X\) in the silly betting game of Example 4.3.2, whose distribution we computed in Example 4.3.6. Plugging straight into the definition, we see \[E(X)=\sum p_i\,x_i = \frac12\cdot5 + \frac12\cdot(-5)=2.5-2.5 = 0 \ .\] In other words, your average net gain playing this silly game many times will be zero. Note that this does not mean anything like "if you lose enough times in a row, the chances of starting to win again will go up," as many gamblers seem to believe; it just means that, in the very long run, we can expect the average winnings to be approximately zero – but no one knows how long that run has to be before the balancing of wins and losses happens.
A more interesting example is
EXAMPLE 4.3.12. In Example 4.3.7 we computed the distribution of the random variable \(X=\text{``{\it sum of the face values}''}\) on the "roll two fair dice, independently" random experiment from Example 4.3.4. It is therefore easy to plug the values of the probabilities and RV values from the distribution table into the formula for expectation, to get \[\begin{aligned} E(X) &=\sum p_i\,x_i\\ &= \frac1{36}\cdot2 + \frac2{36}\cdot3 + \frac3{36}\cdot4 + \frac4{36}\cdot5 + \frac5{36}\cdot6 + \frac6{36}\cdot7 + \frac5{36}\cdot8 + \frac4{36}\cdot9 + \frac3{36}\cdot10\\ &\hphantom{= \frac1{36}\cdot2 + \frac2{36}\cdot3 + \frac3{36}\cdot4 + \frac4{36}\cdot5 + \frac5{36}\cdot6 + \frac6{36}\cdot7 + \frac5{36}\cdot8\ } + \frac2{36}\cdot11 + \frac1{36}\cdot12\\ &= \frac{2\cdot1 + 3\cdot2 + 4\cdot3 + 5\cdot4 + 6\cdot5 + 7\cdot6 + 8\cdot5 + 9\cdot4 + 10\cdot3 + 11\cdot2 + 12\cdot1}{36}\\ &= 7\end{aligned}\] So if you roll two fair dice independently and add the numbers which come up, then do this process many times and take the average, in the long run that average will be the value \(7\).
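Both expectation computations can be replicated exactly with rational arithmetic; a sketch (code mine, not from the text):

```python
from fractions import Fraction
from collections import Counter

def expectation(dist):
    """E(X) = sum of p_i * x_i for a discrete distribution {x: p}."""
    return sum(p * x for x, p in dist.items())

# Example 4.3.11: the silly coin game
game = {5: Fraction(1, 2), -5: Fraction(1, 2)}
print(expectation(game))   # 0

# Example 4.3.12: sum of two fair dice
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
dice = {x: Fraction(n, 36) for x, n in counts.items()}
print(expectation(dice))   # 7
```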
What about continuous random variables? Definition 4.3.5 of distribution explicitly excluded the case of continuous RVs, so does that mean we cannot do probability calculations in that case?
There is, when we think about it, something of a problem here. A distribution is supposed to be a list of possible values of the RV and the probability of each such value. But if some continuous RV has values which are an interval of real numbers, there is just no way to list all such numbers – it has been known since the late 1800s that there is no way to make a list like that (see , for a description of a very pretty proof of this fact). In addition, the chance of some random process producing a real number that is exactly equal to some particular value really is zero: for two real numbers to be precisely equal requires infinite accuracy ... think of all of those decimal digits, marching off in orderly rows to infinity, which must match between the two numbers.
Rather than a distribution, we do the following:
DEFINITION 4.3.13. Let \(X\) be a continuous random variable whose values are the real interval \([x_{min},x_{max}]\), where either \(x_{min}\) or \(x_{max}\) or both may be \(\infty\). A [probability] density function for \(X\) is a function \(f(x)\) defined for \(x\) in \([x_{min},x_{max}]\), meaning it is a curve with one \(y\) value for each \(x\) in that interval, with the property that \[P(a<X<b) = \left\{\begin{matrix}\text{the area in the $xy$-plane above the $x$-axis, below}\\ \text{the curve $y=f(x)$ and between $x=a$ and $x=b$.}\end{matrix}\right.\]
Graphically, what is going on here is
Because of what we know about probabilities, the following is true (and fairly easy to prove):
FACT 4.3.14. Suppose \(f(x)\) is a density function for the continuous RV \(X\) defined on the real interval \([x_{min},x_{max}]\). Then
For all \(x\) in \([x_{min},x_{max}]\), \(f(x)\ge0\).
The total area under the curve \(y=f(x)\), above the \(x\)-axis, and between \(x=x_{min}\) and \(x=x_{max}\) is \(1\).
If we want the idea of picking a real number on the interval \([x_{min},x_{max}]\) at random, where at random means that all numbers have the same chance of being picked (along the lines of fair in Definition 4.1.20), the height of the density function must be the same at all \(x\). In other words, the density function \(f(x)\) must be a constant \(c\). In fact, because of the above Fact 4.3.14, that constant must have the value \(\frac1{x_{max}-x_{min}}\). There is a name for this:
DEFINITION 4.3.15. The uniform distribution on \([x_{min},x_{max}]\) is the distribution for the continuous RV whose values are the interval \([x_{min},x_{max}]\) and whose density function is the constant function \(f(x)=\frac1{x_{max}-x_{min}}\).
EXAMPLE 4.3.16. Suppose you take a bus to school every day and because of a chaotic home life (and, let's face it, you don't like mornings), you get to the bus stop at a pretty nearly perfectly random time. The bus also doesn't stick perfectly to its schedule – but it is guaranteed to come at least every \(30\) minutes. What this adds up to is the idea that your waiting time at the bus stop is a uniformly distributed RV on the interval \([0,30]\).
If you wonder one morning how likely it then is that you will wait for less than \(10\) minutes, you can simply compute the area of the rectangle whose base is the interval \([0,10]\) on the \(x\)-axis and whose height is \(\frac1{30}\), which will be \[P(0<X<10)=\text{\it base}\cdot\text{\it height}=10\cdot\frac1{30}=\frac13\ .\] A picture which should clarify this is
where the area of the shaded region represents the probability of having a waiting time from \(0\) to \(10\) minutes.
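The base-times-height computation in Example 4.3.16 can be cross-checked by simulation; a sketch (code mine; the interval \([0,30]\) and the 10-minute cutoff come from the example):

```python
import random

x_min, x_max = 0, 30          # waiting time is uniform on [0, 30]

# Exact: base * height of the rectangle under the flat density 1/30
p_exact = (10 - 0) * (1 / (x_max - x_min))
print(p_exact)                # 0.333...

# Monte Carlo check: simulate many mornings at the bus stop
random.seed(42)
N = 100_000
hits = sum(random.uniform(x_min, x_max) < 10 for _ in range(N))
print(hits / N)               # close to 1/3
```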
One technical thing that can be confusing about continuous RVs and their density functions is the question of whether we should write \(P(a<X<b)\) or \(P(a\le X\le b)\). But if you think about it, we really have three possible events here: \[\begin{aligned} A &= \{\text{\it outcomes such that $X=a$}\},\\ M &= \{\text{\it outcomes such that $a<X<b$}\},\text{\ and}\\ B &= \{\text{\it outcomes such that $X=b$}\}\ .\end{aligned}\] Since \(X\) always takes on exactly one value for any particular outcome, there is no overlap between these events: they are all disjoint. That means that \[P(A\cup M\cup B) = P(A)+P(M)+P(B) = P(M)\] where the last equality is because, as we said above, the probability of a continuous RV taking on exactly one particular value, as it would in events \(A\) and \(B\), is \(0\). The same would be true if we added merely one endpoint of the interval \((a,b)\). To summarize:
FACT 4.3.17. If \(X\) is a continuous RV with values forming the interval \([x_{min},x_{max}]\) and \(a\) and \(b\) are in this interval, then \[P(a<X<b) = P(a<X\le b) = P(a\le X<b) = P(a\le X\le b)\ .\]
As a consequence of this fact, some authors write probability formulæ about continuous RVs with "\({}<{}\)" and some with "\({}\le{}\)" and it makes no difference.
Let's do a slightly more interesting example than the uniform distribution:
EXAMPLE 4.3.18. Suppose you repeatedly throw darts at a dartboard. You're not a machine, so the darts hit in different places every time and you think of this as a repeatable random experiment whose outcomes are the locations of the dart on the board. You're interested in the probabilities of getting close to the center of the board, so you decide for each experimental outcome (location of a dart you threw) to measure its distance to the center – this will be your RV \(X\).
Being good at this game, you hit near the center more than near the edge and you never completely miss the board, whose radius is \(10\,\mathrm{cm}\) – so \(X\) is more likely to be near \(0\) than near \(10\), and it is never greater than \(10\). What this means is that the RV has values forming the interval \([0,10]\) and the density function, defined on the same interval, should have its maximum value at \(x=0\) and should go down to the value \(0\) when \(x=10\).
You decide to model this situation with the simplest density function you can think of that has the properties we just noticed: a straight line from the highest point of the density function when \(x=0\) down to the point \((10,0)\). The figure that will result will be a triangle, and since the total area must be \(1\) and the base is \(10\) units long, the height must be \(.2\) units. [To get that, we solved the equation \(1=\frac12bh=\frac1210h=5h\) for \(h\).] So the graph must be
and the equation of this linear density function would be \(y=-\frac1{50}x+.2\) [why? – think about the slope and \(y\)-intercept!].
To the extent that you trust this model, you can now calculate the probabilities of events like, for example, "hitting the board within that center bull's-eye of radius \(1.5cm\)," which probability would be the area of the shaded region in this graph:
The upper-right corner of this shaded region is at \(x\)-coordinate \(1.5\) and is on the line, so its \(y\)-coordinate is \(-\frac1{50}1.5+.2=.17\) . Since the region is a trapezoid, its area is the average of the lengths of its two parallel sides times the distance between them, giving \[P(0<X<1.5) = 1.5\cdot\frac{.2+.17}2 = .2775\ .\] In other words, the probability of hitting the bull's-eye, assuming this model of your dart-throwing prowess, is about \(28\)%.
If you don't remember the formula for the area of a trapezoid, you can do this problem another way: compute the probability of the complementary event, and then take one minus that number. The reason to do this would be that the complementary event corresponds to the shaded region here
which is a triangle! Since we surely do remember the formula for the area of a triangle, we find that \[P(1.5<X<10)=\frac12bh=\frac{1}{2}.17\cdot8.5=.7225\] and therefore \(P(0<X<1.5)=1-P(1.5<X<10)=1-.7225=.2775\). [It's nice that we got the same number this way, too!]
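Both routes to the bull's-eye probability, the trapezoid and the complementary triangle, can be checked with a few lines of Python (an illustration of the arithmetic above; the function name `dart_density` is ours):

```python
def dart_density(x):
    """The linear density y = -x/50 + .2 on [0, 10] from the dartboard model."""
    return -x / 50 + 0.2

# Trapezoid method: average of the two parallel sides times the width.
p_bullseye = 1.5 * (dart_density(0) + dart_density(1.5)) / 2

# Complement method: 1 minus the area of the remaining triangle.
p_complement = 1 - 0.5 * (10 - 1.5) * dart_density(1.5)
```

Both `p_bullseye` and `p_complement` come out to .2775, as in the text.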
We've seen some examples of continuous RVs, but we have yet to meet the most important one of all.
DEFINITION 4.3.19. The Normal distribution with mean \(\mu_X\) and standard deviation \(\sigma_X\) is the continuous RV which takes on all real values and is governed by the probability density function \[\rho(x)=\frac1{\sqrt{2\pi\sigma_X^2}}e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}\ .\] If \(X\) is a random variable which follows this distribution, then we say that \(X\) is Normally distributed with mean \(\mu_X\) and standard deviation \(\sigma_X\) or, in symbols, \(X\) is \(N(\mu_X, \sigma_X)\).
[More technical works also call this the Gaussian distribution, named after the great mathematician Carl Friedrich Gauss. But we will not use that term again in this book after this sentence ends.]
The good news about this complicated formula is that we don't really have to do anything with it directly. We will collect some properties of the Normal distribution which have been derived from this formula, and those properties, together with tools such as modern calculators and computers that can find the specific areas we need under the graph of \(y=\rho(x)\), are useful enough that we won't need to work directly with the above formula for \(\rho(x)\) again. It is nice to know that \(N(\mu_X, \sigma_X)\) does correspond to a specific, known density function, though, isn't it?
It helps to start with an image of what the Normal distribution looks like. Here is the density function for \(\mu_X=17\) and \(\sigma_X=3\):
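Although we will rarely touch the formula for ρ(x) again, it is straightforward to evaluate it once, if only to confirm the peak at x = μX and the symmetry visible in the graph. A Python sketch (our own helper, directly transcribing the density formula above):

```python
import math

def normal_density(x, mu, sigma):
    """The Normal density rho(x) with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

peak = normal_density(17, 17, 3)   # the maximum value, attained at x = mu
left = normal_density(14, 17, 3)   # one sigma below the mean...
right = normal_density(20, 17, 3)  # ...equals one sigma above, by symmetry
```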
Now let's collect some of these useful facts about the Normal distributions.
FACT 4.3.20. The density function ρ for the Normal distribution N(μX,σX) is a positive function for all values of x and the total area under the curve y = ρ(x) is 1.
This simply means that ρ is a good candidate for the probability density function for some continuous RV.
FACT 4.3.21. The density function ρ for the Normal distribution N (μX,σX) is unimodal with maximum at x-coordinate μX.
This means that N (μX , σX ) is a possible model for an RV X which tends to have one main, central value, and less often has other values farther away. That center is at the location given by the parameter μX , so wherever we want to put the center of our model for X, we just use that for μX.
FACT 4.3.22. The density function ρ for the Normal distribution N (μX, σX) is symmetric when reflected across the line x = μX.
This means that the amount X misses its center, μX, tends to be about the same when it misses above μX and when it misses below μX. This would correspond to situations where you hit as much to the right as to the left of the center of a dartboard. Or when randomly picked people are as likely to be taller than the average height as they are to be shorter. Or when the time it takes a student to finish a standardized test is as likely to be less than the average as it is to be more than the average. Or in many, many other useful situations.
FACT 4.3.23. The density function ρ for the Normal distribution N(μX, σX) has tails in both directions which are quite thin, in fact get extremely thin as x → ±∞, but never go all the way to 0.
This means that N(μX, σX) models situations where the amount X deviates from its average has no particular cut-off in the positive or negative direction. If you are throwing darts at a dartboard, for example, there is no way to know how far your dart may hit to the right or left of the center, maybe even way off the board and down the hall – although that may be very unlikely. Or perhaps the time it takes to complete some task is usually a certain amount, but every once in a while it might take much more time, so much more that there is really no natural limit you might know ahead of time.
At the same time, those tails of the Normal distribution are so thin, for values far away from μX , that it can be a good model even for a situation where there is a natural limit to the values of X above or below μX. For example, heights of adult males (in inches) in the United States are fairly well approximated by N(69,2.8), even though heights can never be less than 0 and N (69, 2.8) has an infinitely long tail to the left – because while that tail is non-zero all the way as x → −∞, it is very, very thin.
All of the above Facts are clearly true on the first graph we saw of a Normal distribution density function.
FACT 4.3.24. The graph of the density function ρ for the Normal distribution N(μX,σX) has a taller and narrower peak if σX is smaller, and a lower and wider peak if σX is larger.
This allows the statistician to adjust how much variation there typically is in a normally distributed RV: By making σX small, we are saying that an RV X which is N(μX,σX) is very likely to have values quite close to its center, μX. If we make σX large, however, X is more likely to have values all over the place – still, centered at μX, but more likely to wander farther away.
Let's make a few versions of the graph we saw for ρ when μX was 17 and σX was 3, but now with different values of σX. First, if σX = 1, we get
If, instead, σX = 5, then we get
Finally, let's superimpose all of the above density functions on each other, for one, combined graph:
This variety of Normal distributions (one for each μX and σX ) is a bit bewildering, so traditionally, we concentrate on one particularly nice one.
DEFINITION 4.3.25. The Normal distribution with mean μX = 0 and standard deviation σX = 1 is called the standard Normal distribution, and an RV [often written with the variable Z] that is N(0, 1) is described as a standard Normal RV.
Here is what the standard Normal probability density function looks like:
One nice thing about the standard Normal is that all other Normal distributions can be related to the standard.
FACT 4.3.26. If X is N(μX,σX), then Z = (X−μX)/σX is standard Normal.
This has a name.
DEFINITION 4.3.27. The process of replacing a random variable X which is N(μX, σX) with the standard normal RV Z = (X − μX )/σX is called standardizing a Normal RV.
It used to be that standardization was an important step in solving problems with Normal RVs. A problem would be posed with information about some data that was modelled by a Normal RV with given mean μX and standard deviation σX. Then questions about probabilities for that data could be answered by standardizing the RV and looking up values in a single table of areas under the standard Normal curve.
Today, with electronic tools such as statistical calculators and computers, the standardization step is not really necessary.
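The fact that standardization changes nothing about the probabilities can itself be checked numerically. In this Python sketch (our own helpers built on the standard library's error function, not a tool the text uses), computing an area for N(69, 2.8) directly agrees exactly with standardizing first:

```python
import math

def phi(z):
    """CDF of the standard Normal, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def normal_cdf(x, mu, sigma):
    """P(X < x) for X which is N(mu, sigma), computed by standardizing."""
    return phi((x - mu) / sigma)

# Standardizing X ~ N(69, 2.8): the z-score for x = 72 is (72 - 69)/2.8.
z = (72 - 69) / 2.8
direct = normal_cdf(72, 69, 2.8)
standardized = phi(z)
```

Here `direct` and `standardized` are the same number, which is exactly Fact 4.3.26 at work.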
EXAMPLE 4.3.28. As we noted above, the heights of adult men in the United States, when measured in inches, give an RV \(X\) which is \(N(69, 2.8)\). What percentage of the population, then, is taller than \(6\) feet?
First of all, the frequentist point of view on probability tells us that what we are interested in is the probability that a randomly chosen adult American male will be taller than 6 feet – that will be the same as the percentage of the population this tall. In other words, we must find the probability that X > 72, since in inches, 6 feet becomes 72. As X is a continuous RV, we must find the area under its density curve, which is the ρ for N (69, 2.8), between 72 and ∞.
That ∞ is a little intimidating, but since the tails of the Normal distribution are very thin, we can stop measuring area when x is some large number and we will have missed only a very tiny amount of area, so we will have a very good approximation. Let's therefore find the area under ρ from x = 72 up to x = 1000. This can be done in many ways:
With a wide array of online tools – just search for "online normal probability calculator." One of these yields the value .142.
With a TI-8x calculator, by typing
normalcdf(72, 1000, 69, 2.8)
which yields the value .1419884174. The general syntax here is
normalcdf(a, b, μX, σX)
to find P(a < X < b) when X is N(μX, σX). Note you get normalcdf on a TI-8x by pressing 2ND and then VARS (which has DISTR above it), and selecting normalcdf from the menu.
Spreadsheets like LibreOffice Calc and Microsoft Excel will compute this by putting the following in a cell
=1-NORM.DIST(72, 69, 2.8, 1)
giving the value 0.1419883859. Here we are using the command
NORM.DIST(b, μX, σX, 1)
which computes the area under the density function for N(μX, σX) from −∞ to b. [The last input of "1" to NORM.DIST just tells it that we want to compute the area under the curve. If we used "0" instead, it would simply tell us the particular value of ρ(b), which is of very little direct use in probability calculations.] Therefore, by doing 1 − NORM.DIST(72, 69, 2.8, 1), we are taking the total area of 1 and subtracting the area to the left of 72, yielding the area to the right, as we wanted.
Therefore, if you want the area between a and b on an N(μX, σX) RV using a spreadsheet, you would put
=NORM.DIST(b, μX, σX, 1) - NORM.DIST(a, μX, σX, 1)
in a cell.
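The same computation can be done in Python, for readers without a TI calculator or spreadsheet at hand. These helper functions are our own, written to mimic the calculator's normalcdf and the spreadsheet's NORM.DIST using the standard library's error function:

```python
import math

def norm_dist(b, mu, sigma):
    """Area under the N(mu, sigma) density from -infinity to b,
    like the spreadsheet's NORM.DIST(b, mu, sigma, 1)."""
    return 0.5 * (1 + math.erf((b - mu) / (sigma * math.sqrt(2))))

def normalcdf(a, b, mu, sigma):
    """P(a < X < b), like the calculator's normalcdf(a, b, mu, sigma)."""
    return norm_dist(b, mu, sigma) - norm_dist(a, mu, sigma)

p_taller_than_72 = 1 - norm_dist(72, 69, 2.8)
```

Here `p_taller_than_72` comes out to about .14199, matching the calculator and spreadsheet values above.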
While standardizing a non-standard Normal RV and then looking up values in a table is an old-fashioned method that is tedious and no longer really needed, one old technique still comes in handy some times. It is based on the following:
FACT 4.3.29. The 68-95-99.7 Rule: Let X be an N(μX ,σX) RV. Then some special values of the area under the graph of the density curve ρ for X are nice to know:
The area under the graph of ρ from x = μX − σX to x = μX + σX, also known as P(μX − σX < X < μX + σX), is .68.
The area under the graph of ρ from x = μX − 2σX to x = μX + 2σX, also known as P(μX − 2σX < X < μX + 2σX), is .95.
The area under the graph of ρ from x = μX − 3σX to x = μX + 3σX, also known as P(μX − 3σX < X < μX + 3σX), is .997.
This is also called The Empirical Rule by some authors. Visually³:
³By Dan Kernler – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=36506025 .
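The three numbers in the Rule are rounded: the exact areas are about .6827, .9545, and .9973. A quick Python check (here `phi` is our own helper for the standard Normal CDF, via the standard library's error function; the areas depend only on the number of standard deviations, by standardizing):

```python
import math

def phi(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(mu - k*sigma < X < mu + k*sigma) is the same for every Normal RV:
within_1 = phi(1) - phi(-1)  # about .6827
within_2 = phi(2) - phi(-2)  # about .9545
within_3 = phi(3) - phi(-3)  # about .9973
```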
In order to use the 68-95-99.7 Rule in understanding a particular situation, it is helpful to keep an eye out for the numbers that it talks about. Therefore, when looking at a problem, one should notice if the numbers μX +σX, μX −σX, μX +2σX, μX −2σX, μX +3σX, or μX − 3σX are ever mentioned. If so, perhaps this Rule can help.
EXAMPLE 4.3.30. In Example 4.3.28, we needed to compute P(X > 72) where X was known to be N(69, 2.8). Is 72 one of the numbers for which we should be looking, to use the Rule? Well, it's greater than μX = 69, so we could hope that it was μX + σX, μX + 2σX, or μX + 3σX. But these values are
μX + σX = 69 + 2.8 = 71.8,
μX + 2σX = 69 + 5.6 = 74.6, and
μX + 3σX = 69 + 8.4 = 77.4,
none of which is what we need.
Well, it is true that 72 ≈ 71.8, so we could use that fact and accept that we are only getting an approximate answer – an odd choice, given the availability of tools which will give us extremely precise answers, but let's just go with it for a minute.
Let's see, the above Rule tells us that
P(66.2 < X < 71.8) = P(μX − σX < X < μX + σX) = .68.
Now since the total area under any density curve is 1,
P(X < 66.2 or X > 71.8) = 1 − P(66.2 < X < 71.8) = 1 − .68 = .32.
Since the event "X < 66.2" is disjoint from the event "X > 71.8" (X only takes on one value at a time, so it cannot be simultaneously less than 66.2 and greater than 71.8), we can use the simple rule for addition of probabilities:
.32 = P(X < 66.2 or X > 71.8) = P(X < 66.2) + P(X > 71.8).
Now, since the density function of the Normal distribution is symmetric around the line x = μX, the two terms on the right in the above equation are equal, which means that
P(X > 71.8) = \(\frac{1}{2}\)(P(X < 66.2) + P(X > 71.8)) = \(\frac{1}{2}\)·.32 = .16.
It might help to visualize the symmetry here as the equality of the two shaded areas in the following graph
Now, using the fact that 72 ≈ 71.8, we may say that
P (X > 72) ≈ P (X > 71.8) = .16
which, since we know that in fact P (X > 72) = .1419883859, is not a completely terrible approximation.
EXAMPLE 4.3.31. Let's do one more computation in the context of the heights of adult American males, as in the immediately above Example 4.3.30, but now one in which the 68-95-99.7 Rule gives a more precise answer.
So say we are asked this time what proportion of adult American men are shorter than 63.4 inches. Why that height, in particular? Well, it's how tall archaeologists have determined King Tut was in life. [No, that's made up. It's just a good number for this problem.]
Again, looking through the values μX ± σX, μX ± 2σX, and μX ± 3σX, we notice that
63.4 = 69 − 5.6 = μX − 2σX.
Therefore, to answer what fraction of adult American males are shorter than 63.4 inches amounts to asking what is the value of P (X < μX − 2σX).
What we know about μX ± 2σX is that the probability of X being between those two values is P(μX − 2σX < X < μX + 2σX) = .95. As in the previous Example, the complementary event to "μX − 2σX < X < μX + 2σX," which will have probability .05, consists of two pieces "X < μX − 2σX" and "X > μX + 2σX," which have the same area by symmetry. Therefore
\(\begin{aligned} P(X<63.4) &= P\left(X<\mu_{X}-2 \sigma_{X}\right) \\ &= \frac{1}{2}\left[P\left(X<\mu_{X}-2 \sigma_{X}\right)+P\left(X>\mu_{X}+2 \sigma_{X}\right)\right] &&\text{by symmetry} \\ &= \frac{1}{2}\,P\left(X<\mu_{X}-2 \sigma_{X} \text{ or } X>\mu_{X}+2 \sigma_{X}\right) &&\text{since they're disjoint} \\ &= \frac{1}{2}\,P\left(\left(\mu_{X}-2 \sigma_{X}<X<\mu_{X}+2 \sigma_{X}\right)^{c}\right) \\ &= \frac{1}{2}\left[1-P\left(\mu_{X}-2 \sigma_{X}<X<\mu_{X}+2 \sigma_{X}\right)\right] &&\text{by prob. for complements} \\ &= \frac{1}{2}\cdot .05 \\ &= .025 \end{aligned}\)
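It is worth noting that the Rule's answer of .025 is itself slightly approximate, since the .95 in the Rule is a rounded figure; the exact area below μX − 2σX is about .0228. A Python check of both facts (`phi` is our own helper for the standard Normal CDF):

```python
import math

def phi(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = (63.4 - 69) / 2.8         # this is exactly -2, i.e. 63.4 = mu - 2*sigma
p_shorter = phi(z)            # exact area, about .0228; the Rule gives .025
```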
Just as finding the particular X values μX ± σX, μX ± 2σX, and μX ± 3σX in a particular situation would tell us the 68-95-99.7 Rule might be useful, so also would finding the probability values .68, .95, .997, or their complements .32, .05, .003 – or even half of one of those numbers, using the symmetry.
EXAMPLE 4.3.32. Continuing with the scenario of Example 4.3.30, let us now figure out what is the height above which there will only be .15% of the population.
Notice that .15%, or the proportion .0015, is not one of the numbers in the 68-95-99.7 Rule, nor is it one of their complements – but it is half of one of the complements, being half of .003 . Now, .003 is the complementary probability to .997, which was the probability in the range μX ± 3σX. As we have seen already (twice), the complementary area to that in the region between μX ± 3σX consists of two thin tails which are of equal area, each of these areas being \(\frac{1}{2}\)(1 − .997) = .0015 . This all means that the beginning of that upper tail, above which value lies .15% of the population, is the X value μX + 3σX = 69 + 3·2.8 = 77.4.
Therefore .15% of adult American males are taller than 77.4 inches.
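As a final sanity check, the exact tail area above μX + 3σX is about .00135, which the Rule's .997 rounds up to .0015. In Python (again with our own helper for the standard Normal CDF):

```python
import math

def phi(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 69, 2.8
cutoff = mu + 3 * sigma                   # 77.4 inches
p_above = 1 - phi((cutoff - mu) / sigma)  # about .00135; the Rule rounds to .0015
```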
Jonathan A. Poritz
April 2022, 16(2): 467-479. doi: 10.3934/ipi.2021058
Counterexamples to inverse problems for the wave equation
Tony Liimatainen 1,2, and Lauri Oksanen 2,
Department of Mathematics and Statistics, University of Jyväskylä, Jyväskylä, Finland
Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland
Received January 2021; early access October 2021; published April 2022.
We construct counterexamples to inverse problems for the wave operator on domains in $ \mathbb{R}^{n+1} $, $ n \ge 2 $, and on Lorentzian manifolds. We show that non-isometric Lorentzian metrics can lead to the same partial data measurements, which are formulated in terms of certain restrictions of the Dirichlet-to-Neumann map. The Lorentzian metrics giving counterexamples are time-dependent, but they are smooth and non-degenerate. On $ \mathbb{R}^{n+1} $ the metrics are conformal to the Minkowski metric.
Keywords: Inverse problems, counterexamples, wave equation, conformal scaling, Lorentzian manifold, partial data, hidden conformal invariance.
Mathematics Subject Classification: 35R30, 35L05, 58J45.
Citation: Tony Liimatainen, Lauri Oksanen. Counterexamples to inverse problems for the wave equation. Inverse Problems & Imaging, 2022, 16 (2) : 467-479. doi: 10.3934/ipi.2021058
Problem - 4760
In the diagram below, a line is tangent to a unit circle centered at $Q(1, 1)$ and intersects the two axes at $P$ and $R$, respectively. The angle $\angle{OPR}=\theta$. The area bounded by the circle and the $x$-axis is $A(\theta)$ and the area bounded by the circle and the $y$-axis is $B(\theta)$.
Show that the coordinates of the point $Q$ are $(1+\sin\theta, 1+\cos\theta)$. Find the equation of line $PQR$ and determine the coordinates of $P$.
Explain why $A(\theta)=B\left(\frac{\pi}{2}-\theta\right)$ always holds and calculate $A\left(\frac{\pi}{2}\right)$.
Show that $A\left(\frac{\pi}{3}\right)=\sqrt{3}-\frac{\pi}{3}$.
VOL. 17 · NO. 2 | April 2013
Framed BPS states
Davide Gaiotto, Gregory W. Moore, Andrew Neitzke
Adv. Theor. Math. Phys. 17 (2), 241-397, (April 2013)
We consider a class of line operators in $d = 4, \mathcal{N} = 2$ supersymmetric field theories, which leave four supersymmetries unbroken. Such line operators support a new class of BPS states which we call "framed BPS states." These include halo bound states similar to those of $d = 4, \mathcal{N} = 2$ supergravity, where (ordinary) BPS particles are loosely bound to the line operator. Using this construction, we give a new proof of the Kontsevich-Soibelman wall-crossing formula (WCF) for the ordinary BPS particles, by reducing it to the semiprimitive WCF. After reducing on $S^1$, the expansion of the vevs of the line operators in the IR provides a new physical interpretation of the "Darboux coordinates" on the moduli space $\mathcal{M}$ of the theory. Moreover, we introduce a "protected spin character" (PSC) that keeps track of the spin degrees of freedom of the framed BPS states. We show that the generating functions of PSCs admit a multiplication, which defines a deformation of the algebra of holomorphic functions on $\mathcal{M}$. As an illustration of these ideas, we consider the six-dimensional (2, 0) field theory of $A_1$ type compactified on a Riemann surface $\mathcal{C}$. Here, we show (extending previous results) that line operators are classified by certain laminations on a suitably decorated version of $\mathcal{C}$, and we compute the spectrum of framed BPS states in several explicit examples. Finally, we indicate some interesting connections to the theory of cluster algebras.
On the mechanics of crystalline solids with a continuous distribution of dislocations
Demetrios Christodoulou, Ivo Kaelin
We formulate the laws governing the dynamics of a crystalline solid in which a continuous distribution of dislocations is present. Our formulation is based on new differential geometric concepts, which in particular relate to Lie groups. We then consider the static case, which describes crystalline bodies in equilibrium in free space. The mathematical problem in this case is the free minimization of an energy integral, and the associated Euler-Lagrange equations constitute a nonlinear elliptic system of partial differential equations. We solve the problem in the simplest cases of interest.
Mathematische Annalen
The qualitative behavior at the free boundary for approximate harmonic maps from surfaces
Mathematische Annalen, Sep 2018
Jürgen Jost, Lei Liu, Miaomiao Zhu
Let \(\{u_n\}\) be a sequence of maps from a compact Riemann surface M with smooth boundary to a general compact Riemannian manifold N with free boundary on a smooth submanifold \(K\subset N\) satisfying $$\begin{aligned} \sup _n \ \left( \Vert \nabla u_n\Vert _{L^2(M)}+\Vert \tau (u_n)\Vert _{L^2(M)}\right) \le \Lambda , \end{aligned}$$ where \(\tau (u_n)\) is the tension field of the map \(u_n\). We show that the energy identity and the no neck property hold during a blow-up process. The assumptions are such that this result also applies to the harmonic map heat flow with free boundary, to prove the energy identity at finite singular time as well as at infinity time. Also, the no neck property holds at infinity time.
Mathematische Annalen, pp. 1–45. Open Access article. First Online: 24 September 2018; Received: 21 March 2018; Revised: 25 August 2018. Mathematics Subject Classification: 53C43, 58E20. Communicated by F.C. Marques. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013)/ERC Grant agreement no. 267087. Miaomiao Zhu was supported in part by National Natural Science Foundation of China (No. 11601325). We would like to thank the referee for careful comments and useful suggestions in improving the presentation of the paper.

1 Introduction

Let (M, g) be a compact Riemannian manifold with smooth boundary and (N, h) be a compact Riemannian manifold of dimension n. Let \(K\subset N\) be a \(k\)-dimensional closed submanifold where \(1\le k\le n\). For a mapping \(u\in C^2(M,N)\), the energy density of u is defined by $$\begin{aligned} e(u)=\frac{1}{2}|\nabla u|^2=\frac{1}{2}\mathrm{Trace}_gu^*h, \end{aligned}$$ where \(u^*h\) is the pull-back of the metric tensor h.
The energy of the mapping u is defined as $$\begin{aligned} E(u)=\int _Me(u)dvol_g. \end{aligned}$$ Define $$\begin{aligned} C(K)=\left\{ u\in C^2(M,N);u(\partial M)\subset K \right\} . \end{aligned}$$ A critical point of the energy E over C(K) is a harmonic map with free boundary \(u(\partial M)\) on K. The problem of the existence, uniqueness and regularity of such harmonic maps with a free boundary was first systematically investigated in [8]. By Nash's embedding theorem, (N, h) can be isometrically embedded into some Euclidean space \({\mathbb {R}}^N\). Then we obtain the Euler-Lagrange equation $$\begin{aligned} \Delta _g u=A(u)(\nabla u,\nabla u), \end{aligned}$$ where A is the second fundamental form of \(N\subset {\mathbb {R}}^N\) and \(\Delta _g\) is the Laplace-Beltrami operator on M which is defined by $$\begin{aligned} \Delta _g:=-\frac{1}{\sqrt{g}}\frac{\partial }{\partial x^\beta }\left( \sqrt{g}g^{\alpha \beta }\frac{\partial }{\partial x^\alpha }\right) . \end{aligned}$$ Moreover, for \(1\le k\le n-1\), u has free boundary \(u(\partial M)\) on K, that is $$\begin{aligned} u(x)\in K,\quad du(x)(\overrightarrow{n})\perp T_{u(x)}K, \quad a.e.\;\ x\in \partial M, \end{aligned}$$ (1.1) where \(\overrightarrow{n}\) is the outward unit normal vector on \(\partial M\) and \(\perp \) means orthogonal. In particular, for \(k=n\), u satisfies a homogeneous Neumann condition on K, that is $$\begin{aligned} u(x)\in K,\quad du(x)(\overrightarrow{n})=0, \quad \;a.e.\ x\in \partial M. \end{aligned}$$ (1.2) The tension field \(\tau (u)\) is defined by $$\begin{aligned}&\tau (u)=-\Delta _g u+A(u)(\nabla u,\nabla u). \end{aligned}$$ (1.3) Thus, u is a harmonic map if and only if \(\tau (u)=0\). When we consider a limit of a sequence of maps with uniformly \(L^2\)-bounded tension fields, the domain may decompose into several pieces (a phenomenon called bubbling or blow-up), and the limit map satisfies the equations or bounds on each piece.
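As a concrete illustration of the Euler-Lagrange equation above (our example, not from the paper): for the unit sphere \(N=S^{n-1}\subset {\mathbb {R}}^n\), the equation \(\Delta _g u=A(u)(\nabla u,\nabla u)\) becomes, in flat coordinates and with the usual Laplace operator \(\Delta =\partial ^2_x+\partial ^2_y\), the familiar \(\Delta u+|\nabla u|^2u=0\). The following sketch checks this symbolically for the inverse stereographic projection \({\mathbb {R}}^2\rightarrow S^2\) (assuming sympy is available):

```python
# Illustrative check (not from the paper): the inverse stereographic
# projection u: R^2 -> S^2 is a harmonic map, i.e. it satisfies
#   Delta u + |grad u|^2 u = 0   (usual Laplacian on R^2).
import sympy as sp

x, y = sp.symbols('x y', real=True)
den = 1 + x**2 + y**2
u = sp.Matrix([2*x/den, 2*y/den, (x**2 + y**2 - 1)/den])  # |u| = 1

# componentwise Laplacian and the energy density factor |grad u|^2
lap = u.applyfunc(lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2))
grad_sq = sum(sp.diff(f, x)**2 + sp.diff(f, y)**2 for f in u)

residual = sp.simplify(lap + grad_sq * u)
print(residual.T)  # Matrix([[0, 0, 0]]): the harmonic map equation holds
```

The same map also serves as the model "bubble" in the blow-up theory discussed below.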
The question is whether the sum of the energies of the limit map on those pieces equals the limit of the energies of the approximating maps. Affirmative results are called energy identity and no neck property, and the approach is called blow-up theory; the precise definitions will be given below. Because the problem is conformally invariant only in dimension 2, the analysis usually needs to be restricted to that case, and this will also apply to this paper. When M is a closed surface, the compactness problem and the blow-up theory (energy identity and no neck property) for a sequence of maps \(\{u_n\}\) from M to N with uniformly \(L^2\)-bounded tension fields \(\tau (u_n)\) and uniformly bounded energy has been extensively studied (see e.g. [6, 13, 29, 31, 32, 48]), since the fundamental work of Sacks-Uhlenbeck [38]. For sequences of general bounded tension fields, see [20, 21, 26, 49]. For sequences of solutions of more general elliptic systems with an antisymmetric structure, we refer to [16, 18]. For corresponding results about harmonic map flows, see e.g. [24, 31, 32, 44, 47]. For results of other types of approximate sequences for harmonic maps, see e.g. [4, 11, 13, 15, 23]. For the energy identity of harmonic maps from higher dimensional domains, see [25]. In this paper, we shall study the blow-up analysis for a sequence of maps \(\{u_n\}\) from a compact Riemann surface M with smooth boundary \(\partial M\) to a compact Riemannian manifold N with uniformly \(L^2\)-bounded tension fields \(\tau (u_n)\), uniformly bounded energy and with free boundary \(u_n(\partial M)\) on K. Since the interior case is already well understood, we shall focus on the case where the energy concentration occurs at the free boundary and complete the blow-up theory at the free boundary for a bubbling sequence. 
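To fix ideas, the simplest bubble (a standard example, not specific to this paper) is the inverse stereographic projection \(\pi ^{-1}:{\mathbb {R}}^2\rightarrow S^2\), a nonconstant harmonic sphere whose energy equals the area of the target: $$\begin{aligned} E(\pi ^{-1})=\frac{1}{2}\int _{{\mathbb {R}}^2}\frac{8}{(1+|x|^2)^2}dx=4\int _0^\infty \frac{2\pi r}{(1+r^2)^2}dr=4\pi =\mathrm{Area}(S^2). \end{aligned}$$ The energy identity asserts that in the limit no energy beyond such quantized bubble contributions is lost.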
When boundary blow-up occurs, the corresponding neck domains are in general not simply half annuli and hence a finer decomposition of the neck domains would be necessary in order to carry out the neck analysis (see Sect. 5). In fact, we shall first address the regularity problem at the free boundary for weak solutions (see Sect. 3) of $$\begin{aligned} -\Delta _g u+A(u)(\nabla u,\nabla u)=F\ \ in\ M \end{aligned}$$ (1.4) for some \(F\in L^p(M)\), \(p>1\) and under the free boundary constraint (1.1), as it provides some necessary elliptic estimates at the free boundary, which form the analytical foundation of the blow-up theory for the sequence \(\{u_n\}\) (see Sect. 4). We would like to remark that the regularity at the free boundary for weak solutions of (1.4) can be proved by applying the classical reflection methods for the harmonic map case by Gulliver-Jost [8] and Scheven [39], or a modified reflection method in [3] and [43], which combines Hélein's moving frame method [10] and Scheven's reflection method [39] so that the technique of Rivière-Struwe in [35] (which holds true also in dimension 2) can be applied. The latter was developed for Dirac-harmonic maps, which include harmonic maps as a special case. In this paper, we shall present an alternative approach without using moving frames (see Sect. 3). Now, we state our first main result: Theorem 1.1 Let \(u_n:M\rightarrow N\) be a sequence of \(W^{2,2}\) maps with free boundary \(u_n(\partial M)\) on K \((1\le k\le n)\), satisfying $$\begin{aligned} E(u_n)+\Vert \tau (u_n)\Vert _{L^2(M)}\le \Lambda <\infty , \end{aligned}$$ where \(\tau (u_n)\) is the tension field of \(u_n\).
We define the blow-up set $$\begin{aligned} {\mathcal {S}}:=\cap _{r>0}\left\{ x\in M|\liminf _{n\rightarrow \infty }\int _{D^M_r(x)}|du_n|^2dvol\ge \overline{\epsilon }^2\right\} , \end{aligned}$$ (1.5) where \(D^M_r(x)=\{y\in M|\ dist(x,y)\le r\}\) denotes the geodesic ball in M and \(\overline{\epsilon }>0\) is a constant whose value will be given in (5.3). Then \({\mathcal {S}}\) is a finite set \(\{p_1,...,p_I\}\). By taking subsequences, \(\{u_n\}\) converges in \(W^{2,2}_{loc}(M {\setminus } {\mathcal {S}})\) to some limit map \(u_0\in W^{2,2}(M,N)\) with free boundary on K and there are finitely many bubbles: a finite set of harmonic spheres \(w_i^l:S^2\rightarrow N\), \(l=1,...,l_i\), and a finite set of harmonic disks \(w_i^k:D_1(0)\rightarrow N\), \(k=1,...,k_i\) with free boundaries on K, where \(l_i,\ k_i\ge 0\) and \(l_i+k_i\ge 1\), \(i=1,...,I\), such that $$\begin{aligned} \lim _{n\rightarrow \infty }E(u_n)=E(u_0)+\sum _{i=1}^I\sum _{l=1}^{l_i}E(w^l_i) +\sum _{i=1}^I\sum _{k=1}^{k_i}E(w^k_i), \end{aligned}$$ (1.6) and the image \(u_0(M)\cup _{i=1}^I\big (\cup _{l=1}^{l_i}(w^l_i(S^2)) \cup _{k=1}^{k_i}(w^k_i(D_1(0)))\big )\) is a connected set. Here, harmonic spheres are minimal spheres and harmonic disks with free boundary on K are minimal disks with free boundary on K. This is in contrast to the Dirichlet problem, where, due to the pointwise boundary condition, no blow-up at the boundary is possible; here, a blow-up may occur at the boundary and produce one or more harmonic disks with the same free boundary K as the original maps. We should also mention that the Plateau boundary condition for minimal surfaces can also be seen as a free boundary condition where the target set K is a Jordan curve. Here, the monotonicity condition and the three-point normalization that are usually imposed prevent a boundary blow-up; however, see [8] and the systematic discussion in [13].
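A minimal example of a harmonic disk with free boundary (ours, for illustration): take \(N={\mathbb {R}}^2\), \(K=S^1\) the unit circle, and \(u=id:D_1(0)\rightarrow {\mathbb {R}}^2\). Then u is harmonic, \(u(\partial D_1(0))\subset K\), and at a boundary point x the normal derivative \(du(x)(\overrightarrow{n})=\partial _ru=e_r\) is orthogonal to \(T_{u(x)}K=\mathrm{span}(e_\theta )\), so the free boundary condition (1.1) holds. Its energy is $$\begin{aligned} E(u)=\frac{1}{2}\int _{D_1(0)}|\nabla id|^2dx=\frac{1}{2}\cdot 2\cdot \pi =\pi . \end{aligned}$$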
Our results in the above theorem apply to some classical problems like minimal surfaces in Riemannian manifolds with free boundaries, harmonic functions with free boundary (cf. [17]) as well as to pseudo-holomorphic curves in symplectic manifolds with totally real boundary conditions and Lagrangian boundary conditions, cf. [7, 12, 28, 51, 53] and to string theory where the free boundary represents a D-brane, cf. [14]. The reason why we work with a sequence of maps with uniformly \(L^2\)-bounded tension fields and with free boundary is that we want to apply our results in Theorem 1.1 to the following heat flow for harmonic maps with free boundary: $$\begin{aligned} \partial _tu(x,t)= & {} \tau (u(x))\quad (x,t)\;\in M\times (0,T);\end{aligned}$$ (1.7) $$\begin{aligned} u(\cdot ,0)= & {} u_0(x)\quad x\in M;\end{aligned}$$ (1.8) $$\begin{aligned} u(x,t)\in & {} K, \quad a.e. \;\ x\in \partial M, \quad \forall \ t\ge 0;\end{aligned}$$ (1.9) $$\begin{aligned} du(x)(\overrightarrow{n})\perp & {} T_{u(x)}K, \quad \forall (x,t)\;\in \partial M\times (0,T). \end{aligned}$$ (1.10) The existence of a global weak solution of (1.7–1.10) with finitely many singularities was considered by Ma [27], following the pioneering works by Struwe [44, 45]. For higher dimensional cases, we refer to [2, 46]. For other work on the harmonic map flow with free boundary, see [19]. For the harmonic map flow with Dirichlet boundary condition, we refer to Chang [1]. Let \(u:M\times (0,\infty )\rightarrow N\) be a global weak solution to (1.7–1.10), which is smooth away from a finite number of singular points \(\{(x_i,t_i)\}\subset M\times (0,\infty )\). In this paper, we shall complete the qualitative picture at the singularities of this flow, where bubbles (nontrivial harmonic spheres or nontrivial harmonic disks with free boundary) split off.
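For orientation, the basic energy estimate for this flow (our computation, using only (1.3) and the free boundary conditions (1.9)–(1.10)): pairing \(\partial _tu=\tau (u)\) with itself and integrating by parts with the paper's sign convention for \(\Delta _g\), $$\begin{aligned} \frac{d}{dt}E(u(\cdot ,t))=\int _M\langle \nabla u,\nabla \partial _tu\rangle dvol=\int _{\partial M}\langle du(\overrightarrow{n}),\partial _tu\rangle ds+\int _M\langle \Delta _gu,\partial _tu\rangle dvol=-\int _M|\tau (u)|^2dvol\le 0, \end{aligned}$$ where the boundary term vanishes because \(\partial _tu\in T_{u}K\) on \(\partial M\) while \(du(\overrightarrow{n})\perp T_{u}K\), and \(\langle \Delta _gu,\partial _tu\rangle =\langle A(u)(\nabla u,\nabla u)-\tau (u),\tau (u)\rangle =-|\tau (u)|^2\) since \(A(u)(\nabla u,\nabla u)\perp T_uN\). In particular \(\Vert \tau (u(\cdot ,t))\Vert _{L^2(M)}^2\) is integrable in time, which is why sequences with uniformly \(L^2\)-bounded tension fields arise naturally from the flow.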
At infinite time, we have Theorem 1.2 There exist a harmonic map \(u_\infty :M\rightarrow N\) with free boundary in K, a finite number of bubbles \(\{\omega _i\}_{i=1}^m\) and sequences \(\{x^i_n\}_{i=1}^m\subset M\), \(\{\lambda ^i_n\}_{i=1}^m\subset {\mathbb {R}}_+\) and \(\{t_n\}\subset {\mathbb {R}}_+\) such that $$\begin{aligned} \lim _{t\nearrow \infty }E(u(\cdot ,t),M)=E(u_\infty ,M)+\sum _{i=1}^mE(\omega _i) \end{aligned}$$ (1.11) and $$\begin{aligned} \Vert u(\cdot ,t_n)-u_\infty (\cdot )-\sum _{i=1}^m\omega ^i_n(\cdot )\Vert _{L^\infty (M)}\rightarrow 0 \end{aligned}$$ (1.12) as \(n\rightarrow \infty \), where \(\omega ^i_n(\cdot )=\omega ^i\left( \frac{\cdot -x^i_n}{\lambda ^i_n}\right) -\omega _i(\infty )\). Here, (1.12) is equivalent to saying that the image of the weak limit \(u_\infty \) and the bubbles \(\{\omega _i\}_{i=1}^m\) is a connected set as in Theorem 1.1. For finite time blow-ups, we have Theorem 1.3 For \(T_0<\infty \), let \(u\in C^\infty (M\times (0,T_0),N)\) be a solution to (1.7–1.10) with \(T_0\) as its singular time. Then there exist finitely many bubbles \(\{\omega _i\}_{i=1}^l\) such that $$\begin{aligned} \lim _{t\nearrow T_0}E(u(\cdot ,t),M)=E(u(\cdot ,T_0),M)+\sum _{i=1}^lE(\omega _i). \end{aligned}$$ (1.13) To study the regularity or the qualitative behavior at the free boundary for approximate harmonic maps in this paper, we need some new observations. Firstly, we need to extend the solution across the free boundary as in the harmonic map case treated by Scheven [39]; the main difficulty is to write the equation of the extended map into an elliptic system with an antisymmetric potential up to some transformation (see Proposition 3.3). Secondly, thanks to the free boundary condition, we can apply Pohozaev's argument, first introduced by Lin-Wang [24] for approximate harmonic maps, in local regions \(D_r(x)\cap M\) with \(x\in \partial M\). See Lemma 4.3.
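For the reader's convenience, the interior model version of this identity reads as follows (our sketch, with the sign conventions of Sect. 1): pairing \(\tau (u)\) with the radial multiplier \(x\cdot \nabla u\) on a disk \(D_r\subset {\mathbb {R}}^2\) and using that \(A(u)(\nabla u,\nabla u)\perp T_uN\) while \(x\cdot \nabla u\in T_uN\), one obtains $$\begin{aligned} \int _{\partial D_r}\left( |\partial _ru|^2-\frac{1}{r^2}|\partial _\theta u|^2\right) ds=\frac{2}{r}\int _{D_r}\tau (u)\cdot (x\cdot \nabla u)dx. \end{aligned}$$ For \(\tau (u)=0\) the radial and angular energies balance on every circle, and for \(\tau (u)\in L^2\) the right-hand side is controllable, which is what makes such identities useful on neck annuli. The free boundary condition allows the same multiplier argument on half-disks \(D_r(x)\cap M\) centered at \(x\in \partial M\), since on the flat boundary portion \(x\cdot \nabla u\) is tangential to K there while \(du(\overrightarrow{n})\perp T_uK\), so the extra boundary term vanishes.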
This is crucial when we estimate the energy concentration in the neck domain. Thirdly, we have a finer observation of the neck domain. At a boundary blow-up point, the neck domains consist of irregular half annuli. We will decompose these irregular neck domains into three parts: interior parts, regular half annuli centered at points on the boundary, and the remaining parts. The first and third parts are easy to control due to the classical blow-up theory of (approximate) harmonic maps with interior blow-up points. In this paper, we focus on the energy concentration in the domains of the second type. Since the extended map satisfies an elliptic system with an antisymmetric potential up to some transformation and with some error term F (see Proposition 3.3), one can utilize the idea in [18] (with \(F=0\)) with some modifications to get the energy identity. Here in the present paper, we shall adapt the methods in [5] developed for the interior bubbling case to get the energy identity and the no neck property in the free boundary case. To show the no neck property, namely, bubble tree convergence, we shall get the exponential decay of the energy by deriving a differential inequality on the neck region. This paper is organized as follows. In Sect. 2, we recall some classical results which will be used in this paper. In Sect. 3, we derive a new form of the elliptic system for the extended map after involution across the boundary which will allow us to turn the boundary regularity problem into an interior regularity problem. As a corollary of this boundary regularity result, we prove a removability theorem for singularities at the free boundary. In Sect. 4, using the new equation of the involuted map, we obtain the small energy regularity in the free boundary case. The gap theorem and Pohozaev's identity in the free boundary case will also be established. In Sect.
5, we prove the energy identity and no neck property at the free boundary by decomposing the neck domain into several parts, including a half annulus centered at the boundary, and then using the involuted map's equation. Combining this with the interior blow-up theory, we complete the proof of Theorem 1.1. In Sect. 6, we apply Theorem 1.1 to the harmonic map flow with free boundary and prove Theorem 1.2 and Theorem 1.3.

Notation: \(D_r(x_0)\) denotes the closed ball of radius r and center \(x_0\) in \({\mathbb {R}}^2\). Denote $$\begin{aligned}&D^+_r(x_0):=\left\{ x=(x^1,x^2)\in D_r(x_0)|x^2\ge 0\right\} ,\\&D^-_r(x_0):=\left\{ x=(x^1,x^2)\in D_r(x_0)|x^2\le 0\right\} ,\\&\partial ^+ D_r(x_0):=\left\{ x=(x^1,x^2)\in \partial D_r(x_0)|x^2\ge 0\right\} ,\\&\partial ^- D_r(x_0):=\left\{ x=(x^1,x^2)\in \partial D_r(x_0)|x^2\le 0\right\} ,\\&\partial ^0 D^+_r(x_0)=\partial ^0 D^-_r(x_0):=\partial D^+_r(x_0){\setminus } \partial ^+ D_r(x_0). \end{aligned}$$ Let \(a\ge 0\) be a constant, denote $$\begin{aligned} {\mathbb {R}}^2_a:=\left\{ (x^1,x^2)|x^2\ge -a\right\} \quad \ and \;\ \ {\mathbb {R}}^{2+}_a:=\left\{ (x^1,x^2)|x^2> -a\right\} . \end{aligned}$$ For convenience, we denote \(D_r=D_r(0)\), \(D=D_1(0)\) and \({\mathbb {R}}^2_+={\mathbb {R}}^2_a\) when \(a=0\). Let \(T\subset \partial M\) be a smooth boundary portion, denote $$\begin{aligned} W^{k,p}_\partial (T)=\left\{ g\in L^1(T):g=G|_T \text{ for } \text{ some } G\in W ^{k,p}(M)\right\} \end{aligned}$$ with norm $$\begin{aligned} \Vert g\Vert _{W^{k,p}_\partial (T)}=\inf _{G\in W ^{k,p}(M),G|_{T}=g}\Vert G\Vert _{W ^{k,p}(M)}. \end{aligned}$$ In this paper, we use the notation \(\Delta _g\) (or \(\Delta _M\)) to denote the Laplace-Beltrami operator on the Riemannian manifold (M, g) and use \(\Delta :=\partial ^2_x+\partial ^2_y\) to denote the usual Laplace operator on \({\mathbb {R}}^2\).

2 Preliminary results

In this section, we will recall some well-known results that are useful for our problem.
Firstly, we recall the interior small energy regularity result (see [6, 20]), which was first introduced in [38]. Lemma 2.1 Let \(u\in W^{2,p}(D,N)\) for some \(1<p\le 2\). There exist constants \(\epsilon _1=\epsilon _1(p,N)>0\) and \(C=C(p,N)>0\), such that if \(\Vert \nabla u\Vert _{L^2(D)}\le \epsilon _1\), then $$\begin{aligned} \Vert u-\frac{1}{\pi }\int _Du(x)dx\Vert _{W^{2,p}(D_{1/2})}\le C(p,N)(\Vert \nabla u\Vert _{L^p(D)}+\Vert \tau (u)\Vert _{L^p(D)}), \end{aligned}$$ (2.1) where \(\tau (u)\) is the tension field of u. Moreover, by the Sobolev embedding \(W^{2,p}({\mathbb {R}}^2)\subset C^0({\mathbb {R}}^2)\), we have $$\begin{aligned} \Vert u\Vert _{Osc(D_{1/2})}=\sup _{x,y\in D_{1/2}}|u(x)-u(y)|\le C(p,N)(\Vert \nabla u\Vert _{L^p(D)}+\Vert \tau (u)\Vert _{L^p(D)}).\nonumber \\ \end{aligned}$$ (2.2) Secondly, we recall a gap theorem for the case of a closed domain. Lemma 2.2 ([5]) There exists a constant \(\epsilon _0=\epsilon _0(M,N)>0\) such that if u is a smooth harmonic map from a closed Riemann surface M to a compact Riemannian manifold N and satisfying $$\begin{aligned} \int _M|\nabla u|^2dvol\le \epsilon _0, \end{aligned}$$ then u is a constant map. Thirdly, we state an interior removable singularity result. Theorem 2.3 ([22]) Let \(u:D{\setminus }\{0\}\rightarrow N\) be a \(W^{2,2}_{loc}(D{\setminus }\{0\})\) map with finite energy that satisfies $$\begin{aligned} \tau (u)=g\in L^2(D,TN),\quad x\in D{\setminus }\{0\}. \end{aligned}$$ Then u can be extended to a map in \(W^{2,2}(D,N)\). Next, combining the regularity results for critical elliptic systems with an antisymmetric structure developed by Rivière [33] and Rivière-Struwe [35] with various extensions in e.g. [34, 36, 37, 40, 41, 42, 54], we state the following Theorem 2.4 Let \(d \ge 2\), \(0\le s\le d\), \(0<\Lambda <\infty \) and \(1<p<2\).
For any \(A\in L^{\infty }\cap W^{1,2}(D, GL(d))\), \(\Omega \in L^2(D,so(d)\otimes \wedge ^1 {\mathbb {R}}^m)\), \(f\in L^p(D,{\mathbb {R}}^d)\) and any \(u\in W^{1,2}(D,{\mathbb {R}}^d)\) weakly solving $$\begin{aligned} \mathrm{d}^{*}(A\mathrm{d}u)= & {} \langle \Omega , A \mathrm{d}u\rangle + f \quad \text {in}\; D, \end{aligned}$$ (2.3) with A satisfying $$\begin{aligned} \Lambda ^{-1}|\xi | \le |A(x)\xi | \le \Lambda |\xi | \quad \text {for a.e. }\;x\in D, \quad \text {for all }\;\xi \in {\mathbb {R}}^d, \end{aligned}$$ (2.4) we have \(u\in W^{2,p}_{loc}(D)\) and there exist \(\epsilon =\epsilon (d,\Lambda , p)>0\) and \(C=C(d,\Lambda , p)>0\) such that whenever \(\Vert \Omega \Vert _{L^2(D)}+ \Vert {\nabla }A\Vert _{L^2(D)}\le \epsilon \) then $$\begin{aligned} \Vert {\nabla }^2 u\Vert _{L^p\left( D_{\frac{1}{2}}\right) } + \Vert {\nabla }u\Vert _{L^{\frac{2p}{2-p}}\left( D_{\frac{1}{2}}\right) } \le C(\Vert u\Vert _{L^1(D)} + \Vert f\Vert _{L^p(D)} ). \end{aligned}$$ It is well known that the harmonic map equation can be written as a critical elliptic system with an antisymmetric structure and hence we have the following (which can also be proved by using classical methods developed for the harmonic map case, see e.g. [10]) Theorem 2.5 For every \(p\in (1,\infty )\) there exists an \(\epsilon >0\) with the following property. Suppose that \(u\in W^{1,2}(D;N)\) and \(f\in L^p(D;{\mathbb {R}}^N)\) satisfy $$\begin{aligned} \tau (u)=f \ \ in \ D \end{aligned}$$ weakly, then \(u\in W^{2,p}_{loc}(D)\). Finally, we recall the classical boundary estimates for the Laplace operator under a Neumann boundary condition. Lemma 2.6 (see e.g. [50]) Let \(f\in W^{k,p}(M)\) and \(g\in W_\partial ^{k,p}(M)\) for some \(k\in {\mathbb {N}}_0\), \(1<p<\infty \). Assume that \(u\in W^{1,p}(M)\) weakly solves $$\begin{aligned} \Delta _M u=f \quad&in \; M;\\ \frac{\partial u}{\partial \overrightarrow{n}}=g\quad&on \; \partial M.
\end{aligned}$$ Then \(u\in W^{k+2,p}(M)\) is a strong solution. Moreover, there exist constants \(C=C(M)>0\) and \(C'=C'(M)>0\) such that for all \(u\in W^{k+2,p}(M)\) $$\begin{aligned} \left\| u\right\| _{W^{k+2,p}\left( M\right) }&\le C\left( \left\| \Delta _M u\right\| _{W^{k,p}\left( M\right) }+\left\| \frac{\partial u}{\partial \overrightarrow{n}}\right\| _{W_\partial ^{k+1,p}\left( M\right) }+\left\| u\right\| _{L^p\left( M\right) }\right) ;\\ \left\| u\right\| _{W^{k+2,p}\left( M\right) }&\le C'\left( \left\| \Delta _M u\right\| _{W^{k,p}\left( M\right) }+\left\| \frac{\partial u}{\partial \overrightarrow{n}}\right\| _{W_\partial ^{k+1,p}\left( M\right) }\right) , \quad if \; \int _Mu=0. \end{aligned}$$

3 Regularity at the free boundary

In this section, we will prove a regularity theorem for weak solutions of (1.4) and (1.1) with \(F\in L^p(M,{\mathbb {R}}^N)\) for some \(p>1\) where \(F(x)\in T_{u(x)}N\) for \(a.e.\ x\in M\). As an application, we derive the removability theorem for a local singularity at the free boundary. We first need to define weak solutions of (1.4) and (1.1). Definition 3.1 \(u\in H^1(M,N)\) is called a weak solution to (1.4) and (1.1) if \(u(\partial M)\subset K\) a.e. and $$\begin{aligned} -\int _M\nabla u\cdot \nabla \varphi dvol=\int _M F\cdot \varphi dvol \end{aligned}$$ for any vector field \(\varphi \in L^\infty \cap H^1(M,TN)\) that is tangential along u and satisfies the boundary condition \(\varphi (x)\in T_{u(x)}K\) for a.e. \(x\in \partial M\). We also say \(u\in H^1(M,N)\) is a weak solution of (1.4) with free boundary \(u(\partial M)\) on K. For a weakly harmonic map with free boundary (\(i.e.\ F=0\)), it is shown that the image of the map is contained in a small tubular neighborhood of K if the energy of the map is small, see Lemma 3.1 in [39]. The proof there requires the interior \(L^{\infty }\)-estimate for the gradient of the map.
Here, we extend this localization property to the more general case of weak solutions of (1.4) with \(F\in L^p(D^+)\) for some \(1<p\le 2\) and derive a certain oscillation estimate for the solution. In our case, there is in general no interior \(L^{\infty }\)-estimate for the gradient of the map. Lemma 3.2 Let \(F\in L^p(D^+)\) for some \(1<p\le 2\) and \(u\in W^{1,2}(D^+,N)\) be a weak solution of (1.4) with free boundary \(u(\partial ^0D^+)\) on K. Then there exist positive constants \(C=C(p,N)\), \(\epsilon _2=\epsilon _2(p,N)\), such that if \(\Vert \nabla u\Vert _{L^2(D^+)}\le \epsilon _2\), then $$\begin{aligned} dist(u(x),K)\le C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)})\ for\ all\ x\in D_{1/2}^+. \end{aligned}$$ (3.1) Moreover, we have $$\begin{aligned} Osc_{D^+_{\frac{1}{4}}}u:=\sup _{x,y\in D^+_{\frac{1}{4}}}|u(x)-u(y)|\le C(p,N)\left( \Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)}\right) .\qquad \end{aligned}$$ (3.2) Proof We shall follow the scheme of the proof of Lemma 3.1 in [39]. Take \(\epsilon _2=\min \{\epsilon _1,\epsilon \}\) where \(\epsilon _1\) and \(\epsilon \) are the corresponding constants in Lemma 2.1 and Theorem 2.5. By the interior regularity result Theorem 2.5, we know \(u\in W^{2,p}_{loc}(D^+{\setminus } \partial D^+)\). For any \(x_0\in D_{1/2}^+{\setminus } \partial ^0D^+\), set \(R=\frac{1}{3}dist(x_0,\partial ^0 D^+)\) and suppose \(x_1\in \partial ^0D^+\) is the nearest point to \(x_0\), i.e. \(|x_0-x_1|=dist(x_0,\partial ^0 D^+)=3R\). Let \(G_{x_0}\) be the fundamental solution of the Laplace operator with singularity at \(x_0\) which satisfies $$\begin{aligned} |\nabla G_{x_0}|\le C(n)|x-x_0|^{-1} \quad \text{ for } \text{ all } \; x\in {\mathbb {R}}^2.
\end{aligned}$$ Setting \(w(x)=u(x)-{\overline{u}}\) where \({\overline{u}}:=\frac{1}{|D^+_{5R}(x_1)|}\int _{D^+_{5R}(x_1)}udx\) and choosing a cut-off function \(\eta \in C_0^\infty (D_{2R}(x_0))\) such that \(0\le \eta \le 1\), \(\eta |_{D_R(x_0)}\equiv 1\) and \(|\nabla \eta |\le \frac{C}{R}\), by Green's representation theorem and integrating by parts, we have $$\begin{aligned} |w(x_0)|^2&=-\int _{D_{2R}(x_0)}\nabla G_{x_0}(x)\nabla (|w|^2\eta ^2)dx\nonumber \\&\le C\int _{D_{2R}(x_0)}|\nabla G_{x_0}(x)||w\nabla w|\eta ^2dx+C\int _{D_{2R}(x_0){\setminus } D_{R}(x_0)}|\nabla G_{x_0}(x)||w|^2|\nabla \eta |dx\nonumber \\&\le C\Vert w\Vert _{L^\infty (D_{2R}(x_0))}\int _{D_{2R}(x_0)}|\nabla G_{x_0}(x)||\nabla u|dx+CR^{-2}\int _{D_{2R}(x_0){\setminus } D_{R}(x_0)}|w|^2dx\nonumber \\&\le C\Vert w\Vert _{L^\infty (D_{2R}(x_0))}\Vert \nabla G_{x_0}(x)\Vert _{L^{\frac{q}{q-1}}(D_{2R}(x_0))}\Vert \nabla u\Vert _{L^{q}(D_{2R}(x_0))}+CR^{-2}\int _{D_{2R}(x_0)}|w|^2dx\nonumber \\&:={\mathbb {I}}+\mathbb {II}, \end{aligned}$$ (3.3) where \(2<q=\frac{p}{2-p}<\frac{2p}{2-p}\) if \(1<p<2\) and \(q=4\) if \(p=2\). According to Lemma 2.1, we have $$\begin{aligned} R^{1-\frac{2}{s}}\Vert \nabla u\Vert _{L^{s}(D_{2R}(x_0))}+\Vert u\Vert _{Osc(D_{2R}(x_0))}&\le C(s,p,N)\left( \Vert \nabla u\Vert _{L^2(D_{3R}(x_0))}+R^{1-\frac{1}{p}}\Vert F\Vert _{L^p(D_{3R}(x_0))}\right) \nonumber \\&\le C(s,p,N)\left( \Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)}\right) \end{aligned}$$ (3.4) for any \(2<s<\frac{2p}{2-p}\). 
Thus, we obtain $$\begin{aligned} {\mathbb {I}}&\le C(p,N)\frac{\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)}}{R^{1-2/q}}\Vert w\Vert _{L^\infty (D_{2R}(x_0))}\left\| \frac{1}{|x-x_0|}\right\| _{L^{\frac{q}{q-1}}(D_{2R}(x_0))}\\&\le C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)})\Vert w\Vert _{L^\infty (D_{2R}(x_0))}\\&\le C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)})(|w(x_0)|+\Vert u\Vert _{Osc(D_{2R}(x_0))})\\&\le \frac{1}{2}|w(x_0)|^2+C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)})^2. \end{aligned}$$ Combining the Poincaré inequality with the fact \(D_{2R}(x_0)\subset D^+_{5R}(x_1)\subset D^+\), we get $$\begin{aligned} \mathbb {II}\le CR^{-2}\int _{D^+_{5R}(x_1)}|w|^2dx\le C\int _{D^+_{5R}(x_1)}|\nabla u|^2dx. \end{aligned}$$ So, we have $$\begin{aligned} |u(x_0)-{\overline{u}}|\le C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)}). \end{aligned}$$ (3.5) Set \(d(y):=dist(y,K)\) for \(y\in N\), then we have $$\begin{aligned} d({\overline{u}})\le d(u(x))+|u(x)-{\overline{u}}|. \end{aligned}$$ Integrating the above inequality, we get $$\begin{aligned} d\left( {\overline{u}}\right)&\le \frac{1}{|D_{5R}^+\left( x_1\right) |}\int _{D_{5R}^+\left( x_1\right) }d\left( u\left( x\right) \right) dx +\frac{1}{|D_{5R}^+\left( x_1\right) |}\int _{D_{5R}^+\left( x_1\right) }|u\left( x\right) -{\overline{u}}|dx\\&\le C\left( \int _{D_{5R}^+\left( x_1\right) }|\nabla \left( d\left( u\left( x\right) \right) \right) |^2dx\right) ^{1/2} +C\left( \int _{D_{5R}^+\left( x_1\right) }|\nabla u|^2dx\right) ^{1/2}\\&\le C\left( \int _{D_{5R}^+\left( x_1\right) }|\nabla u|^2dx\right) ^{1/2}\le C\Vert \nabla u\Vert _{L^2\left( D^+\right) }, \end{aligned}$$ where the second inequality follows from the Poincaré inequality since \(d(u(x))=0\) on \(\partial ^0D_{5R}^+(x_1)\) and the third inequality follows from the fact that \(Lip(d)=1\). 
Then, we have $$\begin{aligned} dist(u(x_0),K)\le dist({\overline{u}},K)+|u(x_0)-{\overline{u}}|\le C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)}), \end{aligned}$$ which implies (3.1) holds. For (3.2), taking \(x_0=\left( 0,\frac{1}{2}\right) \in D^+_{\frac{1}{2}}{\setminus } \partial ^0D^+\) in (3.5), then \(x_1=0\), \(R=\frac{1}{3}|x_0-x_1|=\frac{1}{6}\) and we get $$\begin{aligned} \left| u\left( 0,\frac{1}{2}\right) -\frac{1}{\left| D^+_{\frac{5}{6}}(0)\right| }\int _{D^+_{\frac{5}{6}}(0)}udx\right| \le C(p,N)(\Vert \nabla u\Vert _{L^2(D^+)}+\Vert F\Vert _{L^p(D^+)}).\qquad \end{aligned}$$ (3.6) For any \(y_0\in D^+_{\frac{1}{4}}{\setminus } \partial ^0D^+\), set \(R_{y_0}=\frac{1}{3}dist(y_0,\partial ^0 D^+)\) and suppose \(y_1\in \partial ^0D^+\) is the nearest point to \(y_0\), i.e. \(|y_0-y_1|=dist(y_0,\partial ^0 D^+)=3R_{y_0}\). Combining (3.5) with (3.6), we obtain $$\begin{aligned} \left| u\left( y_0\right) -u\left( 0,\frac{1}{2}\right) \right|&\le \left| u\left( y_0\right) -\frac{1}{\left| D^+_{5R_{y_0}}\left( y_1\right) \right| }\int _{D^+_{5R_{y_0}}\left( y_1\right) }udx\right| \\&\quad + \left| u\left( 0,\frac{1}{2}\right) -\frac{1}{\left| D^+_{\frac{5}{6}}\left( 0\right) \right| }\int _{D^+_{\frac{5}{6}}\left( 0\right) }udx\right| \\&\quad + \left| \frac{1}{\left| D^+_{5R_{y_0}}\left( y_1\right) \right| }\int _{D^+_{5R_{y_0}}\left( y_1\right) }udx-\frac{1}{\left| D^+_{\frac{5}{6}}\left( 0\right) \right| }\int _{D^+_{\frac{5}{6}}\left( 0\right) }udx\right| \\&\le C\left( p,N\right) \left( \Vert \nabla u\Vert _{L^2\left( D^+\right) }+\Vert F\Vert _{L^p\left( D^+\right) }\right) \\&\quad + \left| \frac{1}{\left| D^+_{5R_{y_0}}\left( y_1\right) \right| }\int _{D^+_{5R_{y_0}}\left( y_1\right) }udx-\frac{1}{\left| D^+_{\frac{5}{6}}\left( 0\right) \right| }\int _{D^+_{\frac{5}{6}}\left( 0\right) }udx\right| .
\end{aligned}$$ Noting that \(D^+_{5R_{y_0}}(y_1)\subset D^+_{\frac{5}{6}}(0)\), by a variant of the classical Poincaré inequality, we have $$\begin{aligned}&\left| \frac{1}{\left| D^+_{5R_{y_0}}\left( y_1\right) \right| }\int _{D^+_{5R_{y_0}}\left( y_1\right) }udx-\frac{1}{\left| D^+_{\frac{5}{6}}\left( 0\right) \right| }\int _{D^+_{\frac{5}{6}}\left( 0\right) }udx\right| \\&\quad \le \frac{1}{\left| D^+_{\frac{5}{6}}\left( 0\right) \right| } \int _{D^+_{\frac{5}{6}}\left( 0\right) }\left| u-\frac{1}{\left| D^+_{5R_{y_0}}\left( y_1\right) \right| }\int _{D^+_{5R_{y_0}}\left( y_1\right) }udx\right| dx\le C\Vert \nabla u\Vert _{L^2\left( D^+_{\frac{5}{6}}\left( 0\right) \right) }\\&\quad \le C\Vert \nabla u\Vert _{L^2\left( D^+_1\left( 0\right) \right) }. \end{aligned}$$ Therefore, $$\begin{aligned} Osc_{D^+_{\frac{1}{4}}}u:=\sup _{x,y\in D^+_{\frac{1}{4}}}|u\left( x\right) -u\left( y\right) |&\le \left| u\left( x\right) -u\left( 0,\frac{1}{2}\right) \right| +\left| u\left( y\right) -u\left( 0,\frac{1}{2}\right) \right| \\&\le C\left( p,N\right) \left( \Vert \nabla u\Vert _{L^2\left( D^+\right) }+\Vert F\Vert _{L^p\left( D^+\right) }\right) . \end{aligned}$$ Thus, the lemma follows immediately. \(\square \) With the help of Lemma 3.2, we can extend the map to the whole disc D by involuting. Firstly, we consider \(1\le k\le n-1\). Without loss of generality, we may assume \(K\cap \partial N=\emptyset \) in this paper. In fact, if \(K\cap \partial N\ne \emptyset \), we extend the target manifold N smoothly across the boundary to another compact Riemannian manifold \(N'\), such that \(N\subset N'\) and \(K\cap \partial N'=\emptyset \). Then we can consider \(N'\) as a new target manifold. Denote by \(K_{\delta _0}\) the \(\delta _0\)-tubular neighborhood of K in N. Taking \(\delta _0>0\) small enough, then for any \(y\in K_{\delta _0}\), there exists a unique projection \(y'\in K\). Set \({\overline{y}}=exp_{y'}\{-exp^{-1}_{y'}y\}\). 
So we may define an involution \(\sigma \), i.e. \(\sigma ^2=Id\) as in [8, 9, 39], by $$\begin{aligned} \sigma (y)={\overline{y}} \quad for \; y\in K_{\delta _0}. \end{aligned}$$ Then it is easy to check that the linear operator \(D\sigma :TN|_{K_{\delta _0}}\rightarrow TN|_{K_{\delta _0}}\) satisfies \(D\sigma (V)=V\) for \(V\in TK\) and \(D\sigma (\xi )=-\xi \) for \(\xi \in T^\perp K\). Let \(F\in L^p(D_2^+)\) for some \(1<p\le 2\) and \(u\in W^{1,2}(D_2^+,N)\) be a weak solution of (1.4) with free boundary \(u(\partial ^0D_2^+)\) on K. If \(\Vert \nabla u\Vert _{L^2(D_2^+)}+\Vert F\Vert _{L^p(D_2^+)}\le \epsilon _3\) where \(\epsilon _3=\epsilon _3(p,N,\delta _0)>0\) is small, by the oscillation estimate (3.2) in Lemma 3.2, we know $$\begin{aligned} u\left( D^+\right) \subset B^N_{C\epsilon _3}\left( u\left( 0,\frac{1}{2}\right) \right) \subset K_{\delta _0}, \end{aligned}$$ (3.7) where \(B^N_{C\epsilon _3}(u(0,\frac{1}{2}))\) is the geodesic ball in N with center \(u(0,\frac{1}{2})\) and radius \( C\epsilon _3\). Then we can define an extension of u to \(D_1(0)\) by $$\begin{aligned} {\widehat{u}}(x)= {\left\{ \begin{array}{ll} u(x),\quad &{}if \; x\in D^+;\\ \sigma (u(\rho (x))),\quad &{}if \; x\in D^-, \end{array}\right. } \end{aligned}$$ (3.8) where \(\rho (x)=(x^1,-x^2)\) for \(x=(x^1,x^2)\in D_1(0)\). For \(k=n\), we use the same extension with \(\sigma =Id\). In the rest of this paper, we always state the argument for \(1\le k\le n-1\), since the case \(k=n\) is similar and easier. At this point, one can derive the regularity at the free boundary for weak solutions of (1.4) by applying the classical methods in [8, 39] for harmonic maps, or the method in [3, 43], which combines the moving frame method with a modification of Rivière-Struwe's method in [35]. Now, we shall give an alternative approach, which is also based on an extension of Rivière-Struwe's result. 
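To illustrate the construction, consider the flat model case (not needed for the proofs, but instructive) where, locally, \(N={\mathbb {R}}^n\) and \(K={\mathbb {R}}^k\times \{0\}\subset {\mathbb {R}}^n\). Then \(\sigma \) is simply the linear reflection $$\begin{aligned} \sigma (y^1,...,y^n)=(y^1,...,y^k,-y^{k+1},...,-y^n), \end{aligned}$$ so that \(D\sigma =Id\) on \(TK\) and \(D\sigma =-Id\) on \(T^\perp K\), and the extension (3.8) reduces to the classical even reflection of the components \(u^1,...,u^k\) (which satisfy a Neumann condition on \(\partial ^0D^+\)) together with the odd reflection of \(u^{k+1},...,u^n\) (which satisfy a Dirichlet condition).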
In order to derive the equation of the involuted map \({\widehat{u}}\), we shall first define $$\begin{aligned} P:B^N_{\delta _1}\left( u\left( 0,\frac{1}{2}\right) \right) \subset K_{\delta _0}&\rightarrow GL\left( {\mathbb {R}}^N,{\mathbb {R}}^N\right) =GL\left( T{\mathbb {R}}^N,T{\mathbb {R}}^N\right) \end{aligned}$$ by $$\begin{aligned} P(y)\xi = D\sigma (y)\xi ^{\top }(y)+\sum _{l={n+1}}^N\langle \xi ,\nu _l(y)\rangle \nu _l(\sigma (y)), \end{aligned}$$ (3.9) where \(\delta _1=\delta _1(N)\) is small such that \(B^N_{4\delta _1}(u(0,\frac{1}{2}))\subset K_{\delta _0}\) and there exists a local orthonormal basis \(\{\nu _l\}_{l=n+1}^N\) of the normal bundle \(T^{\bot }N|_{B^N_{4\delta _1}\left( u\left( 0,\frac{1}{2}\right) \right) }\), and \(\xi ^{\top }(y)\) denotes the orthogonal projection of \(\xi \in {\mathbb {R}}^N\) onto \(T_{y}N\). On the one hand, Lemma 3.2 tells us that \(dist(u(0,\frac{1}{2}), K)\le C\epsilon _3\) which implies \(\sigma \left( B^N_{\delta _1}(u(0,\frac{1}{2}))\right) \subset B^N_{4\delta _1}\left( u\left( 0,\frac{1}{2}\right) \right) \) if we take \(\epsilon _3\) small enough (e.g. \(C\epsilon _3\le \delta _1\)). Thus, (3.9) is well defined. On the other hand, noting that (3.7) holds, if \(\epsilon _3\) is small enough (e.g. \(4C\epsilon _3\le \delta _1\)), then we know that \({\widehat{u}}(D)\subset B^N_{4C\epsilon _3}\left( u\left( 0,\frac{1}{2}\right) \right) \subset B^N_{\delta _1}\left( u\left( 0,\frac{1}{2}\right) \right) \) and the notations \(P({\widehat{u}}(x))\), \(O({\widehat{u}}(x))\) in the sequel (see below) are well defined. It is easy to check that P(y) is an invertible linear operator for any \(y\in B^N_{\delta _1}\left( u\left( 0,\frac{1}{2}\right) \right) \), since the linear operator \(D\sigma (y)\) is invertible. For simplicity, we still denote by P(y) the matrix corresponding to the linear operator P(y) under the standard orthonormal basis of \({\mathbb {R}}^N\). 
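The definition (3.9) is easiest to understand on K itself: for \(y\in K\) we have \(\sigma (y)=y\), so (3.9) reduces to $$\begin{aligned} P(y)\xi = D\sigma (y)\xi ^{\top }(y)+\sum _{l={n+1}}^N\langle \xi ,\nu _l(y)\rangle \nu _l(y), \end{aligned}$$ and since \(D\sigma (y)\) equals \(Id\) on \(T_yK\) and \(-Id\) on the normal bundle of K in N, the operator P(y) is the orthogonal reflection of \({\mathbb {R}}^N\) fixing \(T_yK\oplus T_y^{\bot }N\). In particular, \(P^T(y)P(y)=Id\) for \(y\in K\), which is the source of the eigenvalue identity \(\lambda _i(y)=1\) on K stated below.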
Moreover, the matrix P(y) and its inverse matrix \(P^{-1}(y)\) are smooth for \(y\in B^N_{\delta _1}\left( u\left( 0,\frac{1}{2}\right) \right) \). So, there exists an orthogonal matrix O(y) which is also smooth, such that $$\begin{aligned} O^TP^TPO=\Xi :=\begin{pmatrix} \lambda _1(y) &{} 0 &{} 0 \\ 0 &{} \ddots &{} 0 \\ 0 &{} 0 &{} \lambda _N(y) \end{pmatrix} \end{aligned}$$ where \(P^T\) is the transposed matrix and \(\lambda _i(y)\), \(i=1,...,N\), are the eigenvalues of the symmetric positive definite matrix \(P^T(y)P(y)\). It is easy to see that \(\lambda _i(y)=1\) for \(y\in K\), \(i=1,...,N\). Define $$\begin{aligned} \rho '(x)= {\left\{ \begin{array}{ll} x,\ x\in D^+;\\ \rho (x),\ x\in D^-, \end{array}\right. } \quad and \quad \sigma '({\widehat{u}}(x))= {\left\{ \begin{array}{ll} {\widehat{u}}(x),\ x\in D^+;\\ \sigma ({\widehat{u}}(x)),\ x\in D^-, \end{array}\right. } \end{aligned}$$ and the matrices $$\begin{aligned} Q=Q(x)= {\left\{ \begin{array}{ll} Id_{N\times N},\ x\in D^+;\\ P({\widehat{u}}(x)),\ x\in D^-, \end{array}\right. } \quad and \quad {\widetilde{Q}}={\widetilde{Q}}(x)= {\left\{ \begin{array}{ll} Id_{N\times N},\ x\in D^+;\\ O({\widehat{u}})\sqrt{\Xi }({\widehat{u}})O^T({\widehat{u}}),\ x\in D^-, \end{array}\right. } \end{aligned}$$ where $$\begin{aligned} \sqrt{\Xi }(y)=\begin{pmatrix} \sqrt{\lambda _1(y)} &{} 0 &{} 0 \\ 0 &{} \ddots &{} 0 \\ 0 &{} 0 &{} \sqrt{\lambda _N(y)} \end{pmatrix}. \end{aligned}$$ One can easily check that \({\widetilde{Q}}\in L^\infty \cap W^{1,2}(D,{\mathbb {R}}^N)\) and is invertible. The involuted map satisfies the following proposition: Proposition 3.3 Let \(F\in L^p(D_2^+)\) for some \(1<p\le 2\) and \(u(x)\in W^{1,2}(D_2^+)\) be a weak solution of (1.4) with free boundary \(u(\partial ^0D_2^+)\) on K. 
There exists a positive constant \(\epsilon _3=\epsilon _3(p,N)\), such that if \(\Vert \nabla u\Vert _{L^2(D_2^+)}+\Vert F\Vert _{L^p(D_2^+)}\le \epsilon _3\) and \({\widehat{u}}\) is defined as above, then \({\widehat{u}}\in W^{1,2}(D)\) is a weak solution of $$\begin{aligned} div({\widetilde{Q}}\cdot \nabla {\widehat{u}}(x))= \Omega \cdot {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x)+{\widetilde{Q}}^{-1}\cdot Q^T\cdot F(\rho '(x)), \ x\in D, \end{aligned}$$ (3.10) where $$\begin{aligned} \Omega (x)= {\left\{ \begin{array}{ll} \Omega _2(x),\ x\in D^+;\\ \Omega _1({\widehat{u}}(x))+\Omega _2(x)-{\widetilde{Q}}^{-1}\cdot \frac{1}{2}(Q^T\nabla Q-\nabla Q^TQ)\cdot {\widetilde{Q}}^{-1},\ x\in D^-, \end{array}\right. } \end{aligned}$$ and $$\begin{aligned} \Omega _1= & {} (\Omega _1)_{AB}:=\nabla OO^T+\frac{1}{2}O\sqrt{\Xi }\nabla O^TO\sqrt{\Xi }^{-1}O^T-\frac{1}{2}O\sqrt{\Xi }^{-1}O^T \nabla O\sqrt{\Xi }O^T,\\ \Omega _2= & {} (\Omega _2)_{AB}\\:= & {} {\widetilde{Q}}\cdot Q^{-1}\cdot \nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\cdot \nu ^T_l({\widehat{u}})\cdot {\widetilde{Q}}^{-1}-{\widetilde{Q}}^{-1}\cdot \nu _l({\widehat{u}}) \cdot \nabla \big (\nu ^T_l(\sigma '({\widehat{u}}))\big )\cdot (Q^{-1})^T\cdot {\widetilde{Q}} , \end{aligned}$$ in the distribution sense. Here, \(\Omega (x)\), \(\Omega _1(x)\) and \(\Omega _2(x)\) are antisymmetric matrices in \(L^2(D)\). 
Moreover, if \(u\in W^{2,p}(D^+)\), \(1<p\le 2\), then \({\widehat{u}}\in W^{2,p}(D)\) and satisfies $$\begin{aligned} \triangle {\widehat{u}}+\Upsilon _{{\widehat{u}}}(\nabla {\widehat{u}},\nabla {\widehat{u}})={\widehat{F}}\quad in \quad D, \end{aligned}$$ (3.11) where \(\Upsilon _{{\widehat{u}}}(\cdot ,\cdot )\) is a bounded bilinear form and \({\widehat{F}}\in L^p(D)\), both defined in (3.21), satisfying $$\begin{aligned} |\Upsilon _{{\widehat{u}}}(\nabla {\widehat{u}},\nabla {\widehat{u}})|\le C(N)|\nabla {\widehat{u}}|^2\ \ and \ \ \Vert {\widehat{F}}\Vert _{L^p(D)}\le C(N)\Vert F\Vert _{L^p(D^+)}. \end{aligned}$$ Proof Step 1 Firstly, it is easy to see that \({\widehat{u}}\in W^{1,2}(D)\). Secondly, we prove that for an arbitrary test vector field \(V\in L^\infty \cap W_0^{1,2}(D,TN)\) with \(V(x)\in T_{{\widehat{u}}(x)}N\) for a.e. \(x\in D\), there holds $$\begin{aligned} -\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V)dx=\int _D F(\rho '(x))\cdot Q \cdot Vdx. \end{aligned}$$ (3.12) Set \(\Sigma (x):=D\sigma |_{{\widehat{u}}(x)}\) for \(x\in D\). We decompose V into its symmetric and anti-symmetric parts with respect to \(\sigma \) as in [39], i.e. \(V=V_e+V_a\), where $$\begin{aligned} V_e(x):=\frac{1}{2}\{V(x)+\Sigma (\rho (x))V(\rho (x))\},\ V_a(x):=\frac{1}{2}\{V(x)-\Sigma (\rho (x))V(\rho (x))\}. \end{aligned}$$ Since \(\sigma ^2=Id\), we have \(\Sigma (x)\Sigma (\rho (x))=Id\). Then, $$\begin{aligned} V_e(\rho (x))=\Sigma (x)V_e(x)\ \ and \ \ V_a(\rho (x))=-\Sigma (x)V_a(x). \end{aligned}$$ Noting that \(D\sigma :TN|_{K_{\delta _0}}\rightarrow TN|_{K_{\delta _0}}\) satisfies \(D\sigma (V)=V\) for \(V\in TK\) and \(D\sigma (\xi )=-\xi \) for \(\xi \in T^\perp K\), for any \(x\in \partial ^0 D^+\) we know $$\begin{aligned} V_e(x)=\frac{1}{2}\{V(x)+\Sigma (x)V(x)\}=\Pi _{TK}V(x)\in TK \end{aligned}$$ where \(\Pi _{TK}:TN\rightarrow TK\) is the orthogonal projection. 
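For completeness, the symmetry relations for \(V_e\) and \(V_a\) above can be checked directly from \(\Sigma (x)\Sigma (\rho (x))=Id\) and \(\rho ^2=Id\): $$\begin{aligned} V_e(\rho (x))=\frac{1}{2}\{V(\rho (x))+\Sigma (x)V(x)\}=\Sigma (x)\cdot \frac{1}{2}\{\Sigma (\rho (x))V(\rho (x))+V(x)\}=\Sigma (x)V_e(x), \end{aligned}$$ and the identity \(V_a(\rho (x))=-\Sigma (x)V_a(x)\) follows in the same way, with the plus sign replaced by a minus sign.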
Since u is a weak solution of (1.4) in \(D^+\), we have $$\begin{aligned} -\int _{D^+}\nabla u(x)\nabla V_e(x)dx=\int _{D^+}F(x)\cdot V_e(x)dx. \end{aligned}$$ (3.13) Thus, $$\begin{aligned} -\int _{D^{-}}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V_e(x))dx&= -\int _{D^{-}}D\sigma |_{{\widehat{u}}}\cdot \nabla {\widehat{u}}(x)\cdot \nabla (D\sigma |_{{\widehat{u}}} \cdot V_e(x))dx\nonumber \\&=-\int _{D^{-}}\nabla (u(\rho (x)))\cdot \nabla (\Sigma (x) \cdot V_e(x))dx\nonumber \\&=-\int _{D^{-}}\nabla (u(\rho (x)))\cdot \nabla ( V_e(\rho (x)))dx\nonumber \\&=-\int _{D^+}\nabla u(x)\nabla V_e(x)dx\nonumber \\&=\int _{D^+}F(x)\cdot V_e(x)dx= \int _{D^-}F(\rho '(x))\cdot Q \cdot V_e(x) dx. \end{aligned}$$ (3.14) Moreover, there holds $$\begin{aligned} -\int _{D^{-}}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V_a(x))dx&= -\int _{D^{-}}D\sigma |_{{\widehat{u}}}\cdot \nabla {\widehat{u}}(x)\cdot \nabla (D\sigma |_{{\widehat{u}}} \cdot V_a(x))dx\nonumber \\&=\int _{D^{-}}\nabla (u(\rho (x)))\cdot \nabla ( V_a(\rho (x)))dx\nonumber \\&=\int _{D^+}\nabla u(x)\nabla V_a(x)dx, \end{aligned}$$ (3.15) and $$\begin{aligned} \int _D F(\rho '(x))\cdot Q \cdot V_a(x)dx&=\int _{D^+} F(x) \cdot V_a(x)dx+\int _{D^-} F(\rho '(x))\cdot Q \cdot V_a(x)dx\nonumber \\&=\int _{D^+} F(x) \cdot V_a(x)dx-\int _{D^-} F(\rho '(x))\cdot V_a(\rho '(x))dx\nonumber \\&=\int _{D^+} F(x) \cdot V_a(x)dx-\int _{D^+} F(x) \cdot V_a(x)dx=0. \end{aligned}$$ (3.16) Then (3.13), (3.14), (3.15) and (3.16) imply (3.12) immediately. Step 2 We claim: for any \(V\in L^\infty \cap W_0^{1,2}(D,{\mathbb {R}}^N)\), there holds $$\begin{aligned}&-\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V)dx\nonumber \\&\quad =-\int _{D}\langle Q\cdot \nabla {\widehat{u}}(x), \nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle \nu _l({\widehat{u}}), V\rangle dx+\int _D F(\rho '(x))\cdot Q \cdot V dx. 
\end{aligned}$$ (3.17) In fact, on the one hand, by (3.12), we get $$\begin{aligned} -\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V)dx&=-\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\top )dx\\&\quad -\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\bot )dx\\&=\int _D F(\rho '(x))\cdot Q \cdot V^\top dx\\&\quad -\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\bot )dx. \end{aligned}$$ On the other hand, we have $$\begin{aligned} -\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\bot )dx&= -\int _{D^+}\nabla u(x)\cdot \nabla V^\bot dx\\&\quad -\int _{D^-}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\bot )dx\\&={\mathbb {I}}+\mathbb {II}. \end{aligned}$$ Computing directly, we have $$\begin{aligned} {\mathbb {I}}&=-\int _{D^+}\nabla u(x)\cdot \nabla (\langle V, \nu _l\rangle \nu _l)dx=-\int _{D^+}\nabla u(x)\cdot \langle V, \nu _l\rangle \nabla \nu _ldx\\&=-\int _{D^+}\langle Q\cdot \nabla {\widehat{u}}(x), \nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle V, \nu _l({\widehat{u}})\rangle dx \end{aligned}$$ and $$\begin{aligned} \mathbb {II}&=-\int _{D^-}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\bot )dx\\&=-\int _{D^-}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla \big (Q\cdot \langle V,\nu _l({\widehat{u}})\rangle \nu _l({\widehat{u}})\big )dx\\&=-\int _{D^-}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla \big ( \langle V,\nu _l({\widehat{u}})\rangle \nu _l(\sigma '({\widehat{u}}))\big )dx\\&=-\int _{D^-}\langle Q\cdot \nabla {\widehat{u}}(x), \nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle V, \nu _l({\widehat{u}})\rangle dx. \end{aligned}$$ Combining these equations, we obtain $$\begin{aligned}&-\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V^\bot )dx=-\int _{D}\langle Q\cdot \nabla {\widehat{u}}(x), \nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle \nu _l({\widehat{u}}), V\rangle dx. 
\end{aligned}$$ (3.18) Thus, we have $$\begin{aligned}&-\int _DQ\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V)dx\\&\quad =-\int _{D}\langle Q\cdot \nabla {\widehat{u}}(x), \nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle \nu _l({\widehat{u}}), V\rangle dx+\int _D F(\rho '(x))\cdot Q \cdot V dx, \end{aligned}$$ where the equality follows from the fact that \(F(\rho '(x))\in T_{u(\rho '(x))}N=T_{\sigma '({\widehat{u}})}N\). This is (3.17). Step 3 In order to prove that \({\widehat{u}}\) is a weak solution of (3.10), take an arbitrary test vector field \(V\in L^\infty \cap W_0^{1,2}(D,{\mathbb {R}}^N)\). Since the matrices \({\widetilde{Q}},{\widetilde{Q}}^{-1}\in L^\infty \cap W^{1,2}(D,{\mathbb {R}}^N)\), it is sufficient to prove $$\begin{aligned} -\int _D{\widetilde{Q}}\cdot \nabla {\widehat{u}}(x)\cdot \nabla ({\widetilde{Q}} \cdot V)dx&=\int _{D}\langle \Omega \cdot {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x)+{\widetilde{Q}}^{-1}\cdot Q^T\cdot F(\rho '(x)) ,{\widetilde{Q}}\cdot V\rangle dx \nonumber \\&=-\int _{D}\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \Omega \cdot {\widetilde{Q}}\cdot V\rangle dx \nonumber \\&\quad +\int _DF(\rho '(x))\cdot Q \cdot V dx. 
\end{aligned}$$ (3.19) Computing directly, we get $$\begin{aligned}&-\int _{D^-}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V)dx\\&\quad =-\int _{D^-}\left\langle Q^TQ\cdot \nabla {\widehat{u}}(x),\nabla V\right\rangle dx\\&\qquad -\int _{D^-}\left\langle \nabla {\widehat{u}}(x), Q^T\nabla Q \cdot V\right\rangle dx\\&\quad =-\int _{D^-}\left\langle O\sqrt{\Xi }O^T\cdot \nabla {\widehat{u}}(x), O\sqrt{\Xi }O^T\cdot \nabla V\right\rangle dx\\&\qquad -\int _{D^-}\left\langle \nabla {\widehat{u}}(x), \frac{1}{2}\nabla (Q^TQ) \cdot V\right\rangle dx\\&\qquad -\int _{D^-}\left\langle \nabla {\widehat{u}}(x), \frac{1}{2}(Q^T\nabla Q-\nabla Q^TQ) \cdot V\right\rangle dx\\&\quad =-\int _{D^-}\left\langle O\sqrt{\Xi }O^T\cdot \nabla {\widehat{u}}(x), \nabla (O\sqrt{\Xi }O^T\cdot V)\right\rangle dx\\&\qquad -\int _{D^-}\left\langle \nabla {\widehat{u}}(x), \frac{1}{2}(Q^T\nabla Q-\nabla Q^TQ) \cdot V\right\rangle dx\\&\qquad +\int _{D^-}\left\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \left( \nabla (O\sqrt{\Xi }O^T)-{\widetilde{Q}}^{-1}\cdot \frac{1}{2}\nabla (Q^TQ)\right) \cdot V\right\rangle dx, \end{aligned}$$ and $$\begin{aligned}&\left( \nabla (O\sqrt{\Xi }O^T)-{\widetilde{Q}}^{-1}\cdot \frac{1}{2}\nabla (Q^TQ)\right) \cdot {\widetilde{Q}}^{-1}\\&\quad =\nabla OO^T+\frac{1}{2}O\sqrt{\Xi }\nabla O^TO\sqrt{\Xi }^{-1}O^T-\frac{1}{2}O\sqrt{\Xi }^{-1}O^T \nabla O\sqrt{\Xi }O^T:=\Omega _1, \end{aligned}$$ where \(\Omega _1\) is an antisymmetric matrix since \(O^TO=OO^T=Id\). 
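The antisymmetry of \(\Omega _1\) can be verified term by term: differentiating \(OO^T=Id\) gives \((\nabla OO^T)^T=O\nabla O^T=-\nabla OO^T\), so the first term of \(\Omega _1\) is antisymmetric, while the second and third terms are transposes of each other since \(\sqrt{\Xi }\) is symmetric: $$\begin{aligned} \left( \frac{1}{2}O\sqrt{\Xi }\nabla O^TO\sqrt{\Xi }^{-1}O^T\right) ^T=\frac{1}{2}O\sqrt{\Xi }^{-1}O^T\nabla O\sqrt{\Xi }O^T. \end{aligned}$$ Hence \(\Omega _1^T=-\Omega _1\).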
Noting that \(Q(x)={\widetilde{Q}}(x)\) for \(x\in D^+\), we have $$\begin{aligned}&-\int _{D}Q\cdot \nabla {\widehat{u}}(x)\cdot \nabla (Q \cdot V)dx\\&\quad =-\int _D{\widetilde{Q}}\cdot \nabla {\widehat{u}}(x)\cdot \nabla ({\widetilde{Q}} \cdot V)dx-\int _{D^-}\left\langle \nabla {\widehat{u}}(x), \frac{1}{2}(Q^T\nabla Q-\nabla Q^TQ) \cdot V\right\rangle dx\\&\qquad +\int _{D^-}\left\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \Omega _1\cdot {\widetilde{Q}} \cdot V\right\rangle dx. \end{aligned}$$ By (3.17), we get $$\begin{aligned}&-\int _D{\widetilde{Q}}\cdot \nabla {\widehat{u}}(x)\cdot \nabla ({\widetilde{Q}} \cdot V)dx\nonumber \\&\quad =\int _{D^-}\left\langle \nabla {\widehat{u}}(x), \frac{1}{2}(Q^T\nabla Q-\nabla Q^TQ) \cdot V\right\rangle dx -\int _{D^-}\left\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \Omega _1\cdot {\widetilde{Q}} \cdot V\right\rangle dx\nonumber \\&\qquad -\int _{D}\left\langle Q^TQ\cdot \nabla {\widehat{u}}(x), Q^{-1}\nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\right\rangle \cdot \left\langle \nu _l({\widehat{u}}), V\right\rangle dx+\int _D F(\rho '(x))\cdot Q \cdot V dx. \end{aligned}$$ (3.20) Noting that \({\widetilde{Q}}^T={\widetilde{Q}}\) and $$\begin{aligned} \langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x),{\widetilde{Q}}^{-1}\cdot \nu _l({\widehat{u}})\rangle =0, \end{aligned}$$ we have $$\begin{aligned}&-\int _{D}\langle Q^TQ\cdot \nabla {\widehat{u}}(x), Q^{-1}\nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle \nu _l({\widehat{u}}), V\rangle dx\\&\quad =-\int _{D}\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), {\widetilde{Q}}\cdot Q^{-1}\nabla \big (\nu _l(\sigma '({\widehat{u}}))\big )\rangle \cdot \langle {\widetilde{Q}}^{-1}\cdot \nu _l({\widehat{u}}), {\widetilde{Q}}\cdot V\rangle dx\\&\quad =-\int _{D}\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \Omega _2\cdot {\widetilde{Q}}\cdot V\rangle dx. 
\end{aligned}$$ Thus, (3.20) implies $$\begin{aligned}&-\int _D{\widetilde{Q}}\cdot \nabla {\widehat{u}}(x)\cdot \nabla ({\widetilde{Q}} \cdot V)dx \\&\quad =\int _{D^-}\left\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \frac{1}{2}{\widetilde{Q}}^{-1}\cdot (Q^T\nabla Q-\nabla Q^TQ) \cdot {\widetilde{Q}}^{-1}\cdot {\widetilde{Q}}\cdot V\right\rangle dx\\&\qquad -\int _{D^-}\left\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \Omega _1\cdot {\widetilde{Q}} \cdot V\right\rangle dx \\&\qquad -\int _{D}\left\langle {\widetilde{Q}}\cdot \nabla {\widehat{u}}(x), \Omega _2\cdot {\widetilde{Q}}\cdot V\right\rangle dx+\int _D F(\rho '(x))\cdot Q \cdot V dx. \end{aligned}$$ This is (3.19), which proves the first assertion of the proposition. Step 4 If \(u\in W^{2,p}(D^+)\), according to the properties of \(D\sigma \), it is easy to see that \({\widehat{u}}\in W^{2,p}(D)\) since u satisfies the free boundary condition. Computing directly, we have $$\begin{aligned} \nabla _{e_\alpha }{\widehat{u}}(x)&=D\sigma |_{u(\rho (x))}\circ Du|_{\rho (x)}\circ D\rho |_x(e_\alpha )\\&=D\sigma |_{u(\rho (x))}\circ D\Pi _N|_{u(\rho (x))} \circ Du|_{\rho (x)}\circ D\rho |_x(e_\alpha ), \quad x\in D^-, \end{aligned}$$ where \(\Pi _N:N_{\delta _0'}\rightarrow N\) is the nearest-point projection map for some \(\delta _0'-\)neighborhood of N in \({\mathbb {R}}^N\). By a direct computation, we obtain $$\begin{aligned} \Delta {\widehat{u}}(x)&= D^2(\sigma \circ \Pi _N)|_{\sigma ({\widehat{u}})}(\nabla (u\circ \rho ), \nabla (u\circ \rho ))+D\sigma (\sigma ({\widehat{u}}))\cdot F(\rho (x))\\&= D^2(\sigma \circ \Pi _N)|_{\sigma ({\widehat{u}})}(P({\widehat{u}})\cdot \nabla {\widehat{u}}(x), P({\widehat{u}})\cdot \nabla {\widehat{u}}(x))+P(\sigma ({\widehat{u}}))\cdot F(\rho (x)). \end{aligned}$$ Combining this with the fact that \({\widehat{u}}\) satisfies Eq. (1.4) in \(D^+\), Eq. 
(3.11) follows immediately by taking $$\begin{aligned} \Upsilon _{{\widehat{u}}}(\cdot ,\cdot )= {\left\{ \begin{array}{ll} A({\widehat{u}})(\cdot ,\cdot )\ in\ D^+,\\ D^2(\sigma \circ \Pi _N)|_{\sigma ({\widehat{u}})}(P({\widehat{u}})\cdot , P({\widehat{u}})\cdot )\ in\ D^-; \end{array}\right. } and \; {\widehat{F}}= {\left\{ \begin{array}{ll} F(x)\ in\ D^+,\\ P(\sigma ({\widehat{u}}))\cdot F(\rho (x))\ in\ D^-. \end{array}\right. } \end{aligned}$$ (3.21) \(\square \) Now, applying Theorem 2.4, we derive the following Theorem 3.4 Let \(F\in L^p(D_2^+)\) for some \(p>1\) and \(u\in W^{1,2}(D_2^+,N)\) be a weak solution of (1.4) with free boundary \(u(\partial ^0D_2^+)\) on K. Suppose \(\Vert \nabla u\Vert _{L^2(D_2^+)}+\Vert \tau (u)\Vert _{L^p(D_2^+)}\le \epsilon _3\), then \(u(x)\in W^{2,p}(D_{\frac{1}{2}}^+)\). Proof By Proposition 3.3, the extended \({\widehat{u}} \in W^{1,2}(D, {\mathbb {R}}^N)\) is a weak solution to a system (2.3) with A satisfying (2.4) and with \(\Omega \) satisfying \(|\Omega |\le C |\nabla {\widehat{u}}|\). Then we can apply Theorem 2.4 (taking \(\epsilon _3\) smaller if necessary) for \(1<p<2\) and bootstrap for \(p\ge 2\) to prove the theorem. \(\square \) Moreover, we have Theorem 3.5 Let M be a compact Riemann surface with smooth boundary \(\partial M\), N a compact Riemannian manifold, and \(K\subset N\) a smooth submanifold. Let \(F\in L^p(M)\) for some \(p>1\), and \(u\in H^1(M,N)\) be a weak solution of (1.4) with free boundary \(u(\partial M)\) on K, then \(u\in W^{2,p}(M)\). To end this section, we derive the removability of a local singularity at the free boundary (see Theorem 2.3 for the interior case). 
Theorem 3.6 Let \(u\in W^{2,p}_{loc}(D^+{\setminus }\{0\},N)\), \(p>1\) be a map with finite energy that satisfies $$\begin{aligned}&\tau (u)=g\in L^p(D^+,TN),\quad a.e.\;\ x\in D^+,\end{aligned}$$ (3.22) $$\begin{aligned}&u(x)\in K,\quad du(x)(\overrightarrow{n})\perp T_{u(x)}K, \quad a.e.\;\ x\in \partial ^0D^+, \end{aligned}$$ (3.23) then u can be extended to a map belonging to \(W^{2,p}(D^+,N)\). Proof Applying an argument similar to that of Lemma A.2 in [13], it is easy to see that u is a weak solution of (1.4) with \(F=g\) and with free boundary \(u(\partial ^0D^+)\) on K. By Theorem 3.4, we know \(u\in W^{2,p}(D^+_r)\) for some small \(r>0\). Thus, \(u\in W^{2,p}(D^+)\). \(\square \) 4 Some basic analytic properties for the free boundary case In this section, we will prove some basic lemmas for the free boundary case, such as the small energy regularity (near the boundary), a gap theorem, and a Pohozaev identity. Firstly, we prove a small energy regularity lemma near the boundary. Lemma 4.1 Let \(u\in W^{2,p}(D_2^+,N)\), \(1<p\le 2\) be a map with tension field \(\tau (u)\in L^p(D_2^+)\) and with free boundary \(u(\partial ^0D_2^+)\) on K. 
There exists \(\epsilon _4=\epsilon _4(p,N)>0\), such that if \(\Vert \nabla u\Vert _{L^2(D_2^+)}+\Vert \tau (u)\Vert _{L^p(D_2^+)}\le \epsilon _4\), then $$\begin{aligned} \Vert u-\frac{1}{|D^+|}\int _{D^+}udx\Vert _{W^{2,p}(D_{1/2}^+)}\le C(p,N)(\Vert \nabla u\Vert _{L^p(D^+)}+\Vert \tau (u)\Vert _{L^p(D^+)}).\qquad \end{aligned}$$ (4.1) Moreover, by the Sobolev embedding \(W^{2,p}({\mathbb {R}}^2)\subset C^0({\mathbb {R}}^2)\), we have $$\begin{aligned} \Vert u\Vert _{Osc(D_{1/2}^+)}=\sup _{x,y\in D_{1/2}^+}|u(x)-u(y)|\le C(p,N)(\Vert \nabla u\Vert _{L^p(D^+)}+\Vert \tau (u)\Vert _{L^p(D^+)}).\nonumber \\ \end{aligned}$$ (4.2) Proof By Proposition 3.3, we can extend u to \({\widehat{u}}\in W^{2,p}(D)\) which is defined in D and satisfies $$\begin{aligned} \triangle {\widehat{u}}+\Upsilon _{{\widehat{u}}}(\nabla {\widehat{u}},\nabla {\widehat{u}})={\widehat{F}}\quad in \; D, \end{aligned}$$ (4.3) where \(F=\tau (u)\) in \(D^+\) and \(\Upsilon _{{\widehat{u}}}(\cdot ,\cdot )\), \({\widehat{F}}(x)\) are defined by (3.21). First, we consider \(1<p<2\). Take a cut-off function \(\eta \in C^\infty _0(D)\), such that \(0\le \eta \le 1\), \(\eta |_{D_{3/4}}\equiv 1\) and \(|\nabla \eta |\le C\). Then, we have $$\begin{aligned} |\Delta (\eta {\widehat{u}})|=|\eta \Delta {\widehat{u}}+2\nabla \eta \nabla {\widehat{u}}+{\widehat{u}}\Delta \eta |\le C(N)|\nabla {\widehat{u}}||\nabla (\eta {\widehat{u}})| +C(N)(|\nabla {\widehat{u}}|+|{\widehat{u}}|+|{\widehat{F}}|). \end{aligned}$$ Without loss of generality, we may assume \(\frac{1}{|D^+|}\int _{D^+}{\widehat{u}}dx=\frac{1}{|D^+|}\int _{D^+}udx=0\). 
By the standard elliptic estimates, Sobolev's embedding, Poincaré's inequality and Proposition 3.3, we have $$\begin{aligned} \Vert \eta {\widehat{u}}\Vert _{W^{2,p}(D)}&\le C(p,N)\Vert |\nabla {\widehat{u}}||\nabla (\eta {\widehat{u}})|\Vert _{L^p(D)} +C(p,N)(\Vert \nabla {\widehat{u}}\Vert _{L^p(D)}+\Vert {\widehat{u}}\Vert _{L^p(D)} +\Vert {\widehat{F}}\Vert _{L^p(D)})\\&\le C(p,N)\Vert \nabla {\widehat{u}}\Vert _{L^2(D)}\Vert \nabla (\eta {\widehat{u}})\Vert _{L^{\frac{2p}{2-p}}(D)} +C(p,N)(\Vert \nabla {\widehat{u}}\Vert _{L^p(D)} +\Vert \tau (u)\Vert _{L^p(D^+)})\\&\le C(p,N)\epsilon _4\Vert \eta {\widehat{u}}\Vert _{W^{2,p}(D)} +C(p,N)(\Vert \nabla u\Vert _{L^p(D^+)} +\Vert \tau (u)\Vert _{L^p(D^+)}), \end{aligned}$$ where we also used the fact that \(\Vert \nabla {\widehat{u}}\Vert _{L^p(D)}\le C(N)\Vert \nabla u\Vert _{L^p(D^+)}\), \(1< p\le 2\). Taking \(\epsilon _4\) sufficiently small, we have $$\begin{aligned} \Vert u\Vert _{W^{2,p}(D^+_{1/2})}\le \Vert \eta {\widehat{u}}\Vert _{W^{2,p}(D)}\le C(p,N)(\Vert \nabla u\Vert _{L^p(D^+)} +\Vert \tau (u)\Vert _{L^p(D^+)}). \end{aligned}$$ So, we have proved the lemma in the case \(1<p<2\). Next, if \(p=2\), one can first derive the above estimate with \(p=\frac{4}{3}\). Such an estimate gives an \(L^4(D^+_{3/4})\)-bound for \(\nabla u\). Then one can apply the \(W^{2,2}-\)boundary estimate to the equation and get the conclusion of the lemma for \(p=2\). \(\square \) The gap theorem still holds for harmonic maps with free boundary. Lemma 4.2 There exists a constant \(\epsilon _5=\epsilon _5(M,N)>0\) such that if u is a smooth harmonic map from M to N with free boundary on K and satisfying $$\begin{aligned} \int _M|\nabla u|^2dvol\le \epsilon _5, \end{aligned}$$ then u is a constant map. Proof By Lemmas 2.1, 3.2 and 4.1, for any \(x_0\in M\) we may assume that the image of u is contained in a Fermi-coordinate chart \((B_{R_0}^N(u(x_0)),y^i)\) of N. 
Thus, we can rewrite the equation in the new coordinates as follows: $$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta _M u+\Gamma (u)(\nabla u,\nabla u)=0, \ in\ M;\\ \frac{\partial u^i(x)}{\partial \overrightarrow{n}}=0,\quad 1\le i\le k,\quad u^j(x)=0,\quad k+1\le j\le n,\quad x\in \partial M. \end{array}\right. } \end{aligned}$$ where \(\Gamma (u)(\nabla u,\nabla u)=g^{\alpha \beta }\Gamma ^i_{jk}(u)\frac{\partial u^j}{\partial x^\alpha }\frac{\partial u^k}{\partial x^\beta }\frac{\partial }{\partial y^i}\) and \(\Gamma ^i_{jk}\) are the Christoffel symbols of N in the local coordinates \(\{y^i\}_{i=1}^n\). Without loss of generality, we may assume \(\int _{M}u^i=0\), \(1\le i\le k\). By standard elliptic estimates with Dirichlet and Neumann boundary conditions (see Lemma 2.6), we have $$\begin{aligned} \Vert \nabla u\Vert _{W^{1,4/3}(M)}&\le C(M)\Vert \Delta _M u\Vert _{L^{4/3}(M)}\\&\le C(M,N)\Vert \nabla u\Vert _{L^2(M)}\Vert \nabla u\Vert _{L^{4}(M)}\\&\le C(M,N)\sqrt{\epsilon _5}\Vert \nabla u\Vert _{L^{4}(M)}\le C(M,N)\sqrt{\epsilon _5}\Vert \nabla u\Vert _{W^{1,4/3}(M)}. \end{aligned}$$ If \(\epsilon _5\) is small, then u is a constant map. \(\square \) Next, we derive a Pohozaev identity similar to that in [24]. Lemma 4.3 For \(x_0\in \partial ^0D^+\), let \(u(x)\in W^{2,2}(D^+(x_0),N)\) be a map with tension field \(\tau (u)\in L^2(D^+(x_0))\) and with free boundary \(u(\partial ^0D^+)\) on K. Then, for any \(0<t<1\), there holds $$\begin{aligned} \int _{\partial ^+D_t^+\left( x_0\right) }r\left( \left| \frac{\partial u}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla u\right| ^2\right) =\int _{D_t^+\left( x_0\right) }r\frac{\partial u}{\partial r}\tau dx \end{aligned}$$ (4.4) where \((r,\theta )\in (0,1)\times (0,\pi )\) are the polar coordinates at \(x_0\). 
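The proof of Lemma 4.3 rests on the following elementary divergence identity: since \(div(x-x_0)=2\) in dimension two, integrating by parts gives $$\begin{aligned} \frac{1}{2}\int _{D_t^+\left( x_0\right) }(x-x_0)\cdot \nabla |\nabla u|^2dx=\frac{1}{2}\int _{\partial \left( D_t^+\left( x_0\right) \right) }\langle x-x_0,\overrightarrow{n}\rangle |\nabla u|^2-\int _{D_t^+\left( x_0\right) }|\nabla u|^2dx. \end{aligned}$$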
Proof Since u(x) satisfies the equation $$\begin{aligned} \tau =\Delta u+A(u)(\nabla u,\nabla u) \quad a.e.\ x\in D^+(x_0) \end{aligned}$$ with the free boundary \(u(\partial ^0D^+)\) on K, multiplying both sides of the above equation by \((x-x_0)\nabla u\) and integrating by parts, for any \(0<t<1\), we get $$\begin{aligned}&\int _{D_t^+(x_0)}\tau \cdot ((x-x_0)\nabla u)dx \\&\quad =\int _{D_t^+(x_0)}\Delta u\cdot ((x-x_0)\nabla u)dx \\&\quad =\int _{\partial (D_t^+(x_0))}\frac{\partial u}{\partial n}\cdot ((x-x_0)\nabla u)-\int _{D_t^+(x_0)}\nabla _{e_\alpha }u\cdot \nabla _{e_\alpha }((x-x_0)\nabla u)dx \\&\quad =\int _{\partial ^+ (D_t^+(x_0))}\frac{\partial u}{\partial n}\cdot ((x-x_0)\nabla u)-\int _{D_t^+(x_0)}|\nabla u|^2dx-\frac{1}{2}\int _{D_t^+(x_0)}(x-x_0)\cdot \nabla |\nabla u|^2dx \\&\quad =\int _{\partial ^+ (D_t^+(x_0))}\frac{\partial u}{\partial n}\cdot ((x-x_0)\nabla u)-\frac{1}{2}\int _{\partial (D_t^+(x_0))}\langle x-x_0,\overrightarrow{n}\rangle |\nabla u|^2 \\&\quad =\int _{\partial ^+ (D_t^+(x_0))}\frac{\partial u}{\partial n}\cdot ((x-x_0)\nabla u) -\frac{1}{2}\int _{\partial ^+ (D_t^+(x_0))}\langle x-x_0,\overrightarrow{n}\rangle |\nabla u|^2 \\&\quad =\int _{\partial ^+ (D_t^+(x_0))}r\left( \left| \frac{\partial u}{\partial r}\right| ^2-\frac{1}{2}|\nabla u|^2\right) , \end{aligned}$$ where the second-to-last equality follows from the fact that \(\langle x-x_0,\overrightarrow{n}\rangle =0\) on \(\partial ^0D_t^+(x_0)\). Then the conclusion of the lemma follows immediately. \(\square \) Corollary 4.4 Under the assumptions of Lemma 4.3, we have $$\begin{aligned} \int _{D_{2t}^+\left( x_0\right) {\setminus } D_{t}^+\left( x_0\right) }\left( \left| \frac{\partial u}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla u\right| ^2\right) dx\le t\Vert \nabla u\Vert _{L^2\left( D^+\left( x_0\right) \right) }\Vert \tau \Vert _{L^2\left( D^+\left( x_0\right) \right) }. 
\end{aligned}$$ Proof From Lemma 4.3, we have $$\begin{aligned} \int _{\partial ^+D_t^+\left( x_0\right) }\left( \left| \frac{\partial u}{\partial r}\right| ^2-\frac{1}{2}|\nabla u|^2\right) \!=\!\int _{D_t^+\left( x_0\right) }\frac{r}{t}\frac{\partial u}{\partial r}\tau dx\!\le \! \Vert \nabla u\Vert _{L^2\left( D^+\left( x_0\right) \right) }\Vert \tau \Vert _{L^2\left( D^+\left( x_0\right) \right) }. \end{aligned}$$ Integrating from t to 2t, we will get the conclusion of the corollary. \(\square \) 5 Energy identity and no neck property In this section, we shall prove our main Theorem 1.1. We first consider the following simpler case of a single boundary blow-up point. Theorem 5.1 Let \(u_n \in W^{2,2}(D_1^+(0),N)\) be a sequence of maps with tension fields \(\tau (u_n)\) and with free boundaries \(u_n(\partial ^0D^+)\) on K and satisfying (a) \(\ \Vert u_n\Vert _{W^{1,2}(D^+)}+\Vert \tau (u_n)\Vert _{L^{2}(D^+)}\le \Lambda ,\) (b) \(\ u_n\rightarrow u \text{ strongly } \text{ in } W_{loc}^{1,2}(D^+{\setminus }\{0\},{\mathbb {R}}^N)\ as\ n\rightarrow \infty ,\) (c) \(\ u_n(x)\in K,\quad du_n(x)(\overrightarrow{n})\perp T_{u_n(x)}K, \quad x\in \partial ^0 D^+\). Then there exist a subsequence of \(u_n\) (still denoted by \(u_n\)) and a nonnegative integer L such that, for any \(i=1,...,L\), there exist a point \(x^i_n\), positive numbers \(\lambda ^i_n\) and a nonconstant harmonic sphere \(w^i\) or a nonnegative constant \(a^i\) and a nonconstant harmonic disk \(w^i\) (which we view as a map from \({\mathbb {R}}_{a^i}^2\cup \{\infty \}\rightarrow N\)) with free boundary \(w^i(\partial {\mathbb {R}}_{a^i}^2)\) on K such that: (1) \(\ x^i_n\rightarrow 0,\ \lambda ^i_n\rightarrow 0\), as \(n\rightarrow \infty \); (2) \(\ \frac{dist(x^i_n,\partial ^0D^+)}{\lambda ^i_n}\rightarrow a^i\) or \(\frac{dist(x^i_n,\partial ^0D^+)}{\lambda ^i_n}\rightarrow \infty \ (i.e. 
\ a^i=\infty )\), as \(n\rightarrow \infty \); (3) \(\ \lim _{n\rightarrow \infty }\big (\frac{\lambda ^i_n}{\lambda ^j_n}+\frac{\lambda ^j_n}{\lambda ^i_n} +\frac{|x^i_n-x^j_n|}{\lambda ^i_n+\lambda ^j_n}\big )=\infty \) for any \(i\ne j\); (4) \(\ w^i\) is the weak limit of \(u_n(x^i_n+\lambda ^i_nx)\) in \(W^{1,2}_{loc}({\mathbb {R}}^2)\), if \(\frac{dist(x^i_n,\partial ^0D^+)}{\lambda ^i_n}\rightarrow \infty \) or \(w^i\) is the weak limit of \(u_n(x^i_n+\lambda ^i_nx)\) in \(W^{1,2}_{loc}({\mathbb {R}}_{a^i}^{2+})\), if \(\frac{dist(x^i_n,\partial ^0D^+)}{\lambda ^i_n}\rightarrow a^i\); (5) Energy identity: we have $$\begin{aligned} \lim _{n\rightarrow \infty }E(u_n,D^+)=E(u,D^+)+\sum _{i=1}^LE(w^i). \end{aligned}$$ (5.1) (6) No neck property: The image $$\begin{aligned} u(D^+)\cup \bigcup _{i=1}^Lw^i({\mathbb {R}}^2_{a^i}) \end{aligned}$$ (5.2) is a connected set, where \(w^i({\mathbb {R}}^2_{a^i})=w^i({\mathbb {R}}^2)\), if \(\frac{dist(x^i_n,\partial ^0D^+)}{\lambda ^i_n}\rightarrow \infty \). Proof of Theorem 5.1 Assume 0 is the only blow-up point of the sequence \(\{u_n\}\) in \(D^+\), i.e. $$\begin{aligned} \liminf _{n\rightarrow \infty }E(u_n;D^+_r)\ge \frac{\overline{\epsilon }^2}{8}\quad \text{ for } \text{ all } \;r>0 \end{aligned}$$ (5.3) where \(\overline{\epsilon }=\min \{\epsilon _1,\epsilon _3,\epsilon _4\}\). By the standard argument of blow-up analysis we can assume that, for any n, there exist sequences \(x_n\rightarrow 0\) and \(r_n\rightarrow 0\) such that $$\begin{aligned} E(u_n;D^+_{r_n}(x_n))=\sup _{\begin{array}{c} x\in D^+,r\le r_n\\ D^+_r(x)\subset D^+ \end{array}}E(u_n;D^+_r(x))=\frac{\overline{\epsilon }^2}{32}. \end{aligned}$$ (5.4) Denoting \(d_n=dist(x_n,\partial ^0D^+)\), we have the following two cases: Case 1 \(\limsup _{n\rightarrow \infty }\frac{d_n}{r_n}<\infty \). Set $$\begin{aligned} v_n(x):=u_n(x_n+r_nx) \end{aligned}$$ and $$\begin{aligned} B_n:=\{x\in {\mathbb {R}}^2|x_n+r_nx\in D^+\}. 
\end{aligned}$$ After taking a subsequence, we may assume \(\lim _{n\rightarrow \infty }\frac{d_n}{r_n}=a\ge 0\). Then $$\begin{aligned} B_n\rightarrow {\mathbb {R}}^2_a:=\{(x^1,x^2)|x^2\ge -a\}. \end{aligned}$$ It is easy to see that \(v_n(x)\) is defined in \(B_n\) and satisfies $$\begin{aligned}&\tau (v_n(x))=\Delta v_n(x)+A(v_n(x))(\nabla v_n(x),\nabla v_n(x))\quad x\in B_n; \end{aligned}$$ (5.5) $$\begin{aligned}&v_n(x)\in K,\quad dv_n(x)(\overrightarrow{n})\perp T_{v_n(x)}K, \text{ if } x_n+r_nx\in \partial ^0D^+, \end{aligned}$$ (5.6) where \(\tau (v_n(x))=r_n^2\tau (u_n(x_n+r_nx))\). Noting that for any \(x\in \partial ^0B_n:=\{x\in {\mathbb {R}}^2| \ x_n+r_nx\in \partial ^0D^+\}\) on the boundary, $$\begin{aligned} v_n(x)\in K,\quad dv_n(x)(\overrightarrow{n})\perp T_{v_n(x)}K, \end{aligned}$$ since \(\Vert \tau (v_n)\Vert _{L^2( B_n)}\le r_n\Vert \tau (u_n)\Vert _{L^2(D^+)}\le \frac{\overline{\epsilon }^2}{4}\) when n is big enough, by (5.4) and Lemma 4.1, we have $$\begin{aligned} \Vert v_n\Vert _{W^{2,2}(D_{4R}(0)\cap B_n)}\le C(R,N) \end{aligned}$$ (5.7) for any \(D_{4R}(0)\subset {\mathbb {R}}^2\). Then there exist a subsequence of \(v_n\) (also denoted by \(v_n\)) and a nontrivial harmonic map \({\widetilde{v}}^1\in W^{2,2}({\mathbb {R}}_a^2)\) with free boundary \({\widetilde{v}}^1(\partial {\mathbb {R}}_a^2)\) on K such that for any \(R>0\), there hold $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert v_n(x)-{\widetilde{v}}^1(x)\Vert _{W^{1,2}(D_R(0)\cap B_n\cap {\mathbb {R}}^2_a)}=0 \end{aligned}$$ (5.8) and $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert v_n(x)\Vert _{W^{1,2}(D_R(0)\cap B_n)}=\Vert {\widetilde{v}}^1(x)\Vert _{W^{1,2}(D_R(0)\cap {\mathbb {R}}^2_a)}. \end{aligned}$$ (5.9) In fact, by (5.7), we have $$\begin{aligned} \left\| v_n\left( x-\left( 0,\frac{d_n}{r_n}\right) \right) \right\| _{W^{2,2}(D^+_{3R}(0))}\le C(R,N) \end{aligned}$$ (5.10) when n is big enough. 
Then there exist a subsequence of \(v_n\) (also denoted by \(v_n\)) and a harmonic map \({\widetilde{v}}\in W^{2,2}(D^+_{3R}(0))\) such that $$\begin{aligned} \lim _{n\rightarrow \infty }\left\| v_n\left( x-\left( 0,\frac{d_n}{r_n}\right) \right) -{\widetilde{v}}(x)\right\| _{W^{1,2}(D^+_{3R}(0))}=0 \end{aligned}$$ and \(v_n\left( x-\left( 0,\frac{d_n}{r_n}\right) \right) \rightarrow {\widetilde{v}}(x)\), \(\frac{dv_n\left( x-\left( 0,\frac{d_n}{r_n}\right) \right) }{d\overrightarrow{n}}\rightarrow \frac{d{\widetilde{v}}}{d\overrightarrow{n}}(x)\), \(a.e.\ x\in \partial ^0D^+_{3R}(0)\) as \(n\rightarrow \infty \). Set $$\begin{aligned} {\widetilde{v}}^1(x):={\widetilde{v}}(x+(0,a)), \end{aligned}$$ then \({\widetilde{v}}^1\in W^{2,2}({\mathbb {R}}_a^2\cap D_{2R}(0))\) is a harmonic map with free boundary \({\widetilde{v}}^1(\partial {\mathbb {R}}_a^2\cap D_{2R}(0))\) on K such that $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert v_n(x)-{\widetilde{v}}^1(x)\Vert _{W^{1,2}(D_{2R}(0)\cap B_n\cap {\mathbb {R}}^2_a)}=0. \end{aligned}$$ Lastly, (5.9) follows from (5.7), (5.8), Sobolev embedding, Young's inequality and the fact that the measure of \(D_{2R}(0)\cap B_n{\setminus } {\mathbb {R}}^2_a\) goes to zero as \(n\rightarrow \infty \). In addition, \(E({\widetilde{v}}^1;D_1(0)\cap {\mathbb {R}}_a^2)=\frac{\overline{\epsilon }^2}{32}\). By the conformal invariance of harmonic maps and the removable singularity Theorem 3.6, \({\widetilde{v}}^1(x)\) can be extended to a nontrivial harmonic disk. Case 2 \(\limsup _{n\rightarrow \infty }\frac{d_n}{r_n}=\infty \). In this case, we can see that \(v_n(x)\) is defined in \(B_n\) which tends to \({\mathbb {R}}^2\) as \(n\rightarrow \infty \). Moreover, for any \(x\in {\mathbb {R}}^2\), when n is sufficiently large, by (5.4), we have $$\begin{aligned} E(v_n;D_1(x))\le \frac{\overline{\epsilon }^2}{32}. 
\end{aligned}$$ (5.11) According to Lemma 2.1, there exist a subsequence of \(v_n\) (we still denote it by \(v_n\)) and a harmonic map \(v^1(x)\in W^{1,2}({\mathbb {R}}^2,N)\) such that $$\begin{aligned} \lim _{n\rightarrow \infty }v_n(x)=v^1(x) \text{ in } W^{1,2}_{loc}({\mathbb {R}}^2). \end{aligned}$$ Moreover, \(E(v^1;D_1(0))=\frac{\overline{\epsilon }^2}{32}\). By the standard theory of harmonic maps, \(v^1(x)\) can be extended to a nontrivial harmonic sphere. We call the above harmonic sphere \(v^1(x)\) or harmonic disk \({\widetilde{v}}^1(x)\) the first bubble. We will split the proof of Theorem 5.1 into two parts: the energy identity and the no neck result. Now, we begin to prove the energy identity. Energy identity: By the standard induction argument in [6], we only need to prove the theorem in the case where there is only one bubble. By Lemmas 2.1 and 4.1, there exist a subsequence of \(u_n\) (still denoted by \(u_n\)) and a weak limit \(u\in W^{2,2}(D^+)\) such that $$\begin{aligned} \lim _{\delta \rightarrow 0}\lim _{n\rightarrow \infty }E(u_n;D^+{\setminus } D^+_\delta (x_n))=E(u;D^+). \end{aligned}$$ So, in both cases, the energy identity is equivalent to $$\begin{aligned} \lim _{R\rightarrow \infty }\lim _{\delta \rightarrow 0}\lim _{n\rightarrow \infty }E(u_n;D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n))=0. \end{aligned}$$ (5.12) To prove the no neck property, i.e. that the sets \(u(D^+)\) and \(v^1({\mathbb {R}}^2\cup \{\infty \})\) or \({\widetilde{v}}^1({\mathbb {R}}_a^2\cup \{\infty \})\) are connected, it is enough to show that $$\begin{aligned} \lim _{R\rightarrow \infty }\lim _{\delta \rightarrow 0}\lim _{n\rightarrow \infty }\Vert u_n\Vert _{Osc\big (D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)\big )}=0. \end{aligned}$$ (5.13) Step 1 We prove the energy identity for Case 1, i.e., \(\lim _{n\rightarrow \infty }\frac{d_n}{r_n}=a<\infty \). 
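Before making the key claim, we record for orientation the elementary splitting behind the reduction to (5.12) (a routine bookkeeping step): $$\begin{aligned} E(u_n;D^+)=E(u_n;D^+{\setminus } D^+_\delta (x_n))+E(u_n;D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n))+E(u_n;D^+_{r_nR}(x_n)). \end{aligned}$$ As \(n\rightarrow \infty \), \(\delta \rightarrow 0\) and \(R\rightarrow \infty \), the first term converges to \(E(u;D^+)\), while \(E(u_n;D^+_{r_nR}(x_n))=E(v_n;D_R(0)\cap B_n)\) converges to the energy of the first bubble by (5.8) and (5.9) (respectively, by the \(W^{1,2}_{loc}\) convergence in Case 2). Hence the energy identity (5.1) with one bubble holds precisely when the neck energy in (5.12) vanishes.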
Under the "one bubble" assumption, we first make the following: Claim: for any \(\epsilon >0\), there exist \(\delta >0\) and \(R>0\) such that $$\begin{aligned} \int _{D^+_{8t}(x_n){\setminus } D^+_{t}(x_n)}|\nabla u_n|^2dx\le \epsilon ^2 \text{ for } \text{ any } t\in \left( \frac{1}{2}r_nR,2\delta \right) \end{aligned}$$ (5.14) when n is large enough. In fact, if (5.14) is not true, then we can find \(t_n\rightarrow 0\), such that \(\lim _{n\rightarrow \infty }\frac{t_n}{r_n}=\infty \) and $$\begin{aligned} \int _{D^+_{8t_n}(x_n){\setminus } D^+_{t_n}(x_n)}|\nabla u_n|^2dx\ge \epsilon _6>0. \end{aligned}$$ (5.15) Then we have $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{d_n}{t_n}=0. \end{aligned}$$ We set $$\begin{aligned} w_n(x):=u_n(x_n+t_nx) \end{aligned}$$ and $$\begin{aligned} B'_n:=\{x\in {\mathbb {R}}^2|x_n+t_nx\in D^+\}. \end{aligned}$$ Then \(w_n(x)\) lives in \(B'_n\) which tends to \({\mathbb {R}}_+^2\) as \(n\rightarrow \infty \). It is easy to see that 0 is an energy concentration point for \(w_n\). We have to consider the following two cases: \(\mathbf (a) \) \(w_n\) has no other energy concentration points except 0. By Lemmas 2.1, 4.1 and the process of constructing the first bubble, passing to a subsequence, we may assume that \(w_n\) converges to a harmonic map \(w(x):{\mathbb {R}}^2_+\rightarrow N\) with free boundary \(w(\partial {\mathbb {R}}^2_+)\) on K satisfying, for any \(R>0\), $$\begin{aligned} \sup _{\lambda >0}\lim _{n\rightarrow \infty }\Vert w_n(x)-w(x)\Vert _{W^{1,2}\big ((D_R(0)\cap B'_n){\setminus } D_\lambda (0)\big )}=0. \end{aligned}$$ Note that (5.15) implies $$\begin{aligned} \int _{(D_8{\setminus } D_1)\cap {\mathbb {R}}_+^2}|\nabla w|^2dx=\lim _{n\rightarrow \infty }\int _{(D_8{\setminus } D_1)\cap B'_n}|\nabla w_n|^2dx\ge \epsilon _6>0. \end{aligned}$$ (5.16) By the conformal invariance of harmonic maps and Theorem 3.6, w(x) is a nontrivial harmonic disk which can be seen as the second bubble. 
This contradicts the "one bubble" assumption. \(\mathbf (b) \) \(w_n\) has another energy concentration point \(p\ne 0\). Without loss of generality, we may assume p is the only energy concentration point in \(D^+_{r_0}(p)\) for some \(r_0>0\). Similar to the process of constructing the first bubble, there exist \(x_n'\rightarrow p\) and \(r_n'\rightarrow 0\) such that $$\begin{aligned} E(w_n;D^+_{r'_n}(x'_n)\cap B_n')=\sup _{\begin{array}{c} x\in D_{r_0}^+(p),r\le r_n\\ D^+_r(x)\subset D_{r_0}^+(p) \end{array}}E(w_n;D^+_r(x)\cap B_n')=\frac{\overline{\epsilon }^2}{32}. \end{aligned}$$ (5.17) By (5.4), we know \(r_n't_n\ge r_n\). Then, passing to a subsequence, we may assume \(\lim _{n\rightarrow \infty }\frac{d_n}{r'_nt_n}=d\in [0,a]\). Moreover, there exists a nontrivial harmonic map \({\widetilde{v}}^2(x):{\mathbb {R}}^2_d\rightarrow N\) with free boundary \({\widetilde{v}}^2(\partial {\mathbb {R}}^2_d)\) on K satisfying, for any \(R>0\), $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert w_n(x_n'+r_n'x)- {\widetilde{v}}^2(x)\Vert _{W^{1,2}(D_R(0)\cap B''_n)}=0, \end{aligned}$$ where \(B''_n:=\{x\in {\mathbb {R}}^2|x'_n+r'_nx\in B'_n\}\). That is, $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert u_n(x_n+t_nx_n'+t_nr_n'x)- {\widetilde{v}}^2(x)\Vert _{W^{1,2}(D_R(0)\cap B''_n)}=0. \end{aligned}$$ (5.18) Therefore, \({\widetilde{v}}^2(x)\) is also a bubble for the sequence \(u_n\). This is also a contradiction to the "one bubble" assumption. Thus, we proved Claim (5.14). Let \(x_n'\in \partial ^0 D^+\) be the projection of \(x_n\), i.e. \(d_n=dist(x_n,\partial ^0D^+)=|x_n-x_n'|\). 
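The decomposition below compares half-disks centred at \(x_n\) with half-disks centred at its projection \(x'_n\); since \(|x_n-x'_n|=d_n\) and \(\frac{d_n}{r_n}\rightarrow a\), the inclusions used there are simple consequences of the triangle inequality, which we record for completeness. For \(R\ge a+1\) and n large we have \(d_n\le r_nR\), hence \(d_n\le \frac{t}{2}\) for every \(t\ge 2r_nR\), and $$\begin{aligned} t\le |x-x'_n|\le 2t\quad \Longrightarrow \quad \frac{t}{2}\le t-d_n\le |x-x_n|\le 2t+d_n\le 4t, \end{aligned}$$ which gives \(D^+_{2t}(x'_n){\setminus } D^+_{t}(x'_n)\subset D^+_{4t}(x_n){\setminus } D^+_{t/2}(x_n)\); the inclusions for \(\Omega _1\) and \(\Omega _3\) follow in the same way.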
Firstly, we decompose the neck domain \(D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)\) as follows: $$\begin{aligned} D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)&=D^+_\delta (x_n){\setminus } D^+_{\frac{\delta }{2}}(x'_n)\cup D^+_{\frac{\delta }{2}}(x'_n){\setminus } D^+_{2r_nR}(x'_n)\cup D^+_{2r_nR}(x'_n){\setminus } D^+_{r_nR}(x_n)\\&:=\Omega _1\cup \Omega _2\cup \Omega _3, \end{aligned}$$ when n and R are large. Since \(\lim _{n\rightarrow \infty }\frac{d_n}{r_n}=a\), when n and R are large enough, it is easy to see that $$\begin{aligned} \Omega _1\subset D^+_\delta (x_n){\setminus } D^+_{\frac{\delta }{4}}(x_n)\quad and \quad \Omega _3\subset D^+_{4r_nR}(x_n){\setminus } D^+_{r_nR}(x_n). \end{aligned}$$ Moreover, for any \(2r_nR\le t\le \frac{1}{2}\delta \), there holds $$\begin{aligned} D^+_{2t}(x'_n){\setminus } D^+_{t}(x'_n)\subset D^+_{4t}(x_n){\setminus } D^+_{t/2}(x_n). \end{aligned}$$ By assumption (5.14), we have $$\begin{aligned} E(u_n;\Omega _1)+E(u_n;\Omega _3)\le \epsilon ^2 \end{aligned}$$ (5.19) and $$\begin{aligned} \int _{D^+_{2t}(x'_n){\setminus } D^+_{t}(x'_n)}|\nabla u_n|^2dx\le \epsilon ^2 \text{ for } \text{ any } \; t\in (2r_nR, \frac{1}{2}\delta ). \end{aligned}$$ (5.20) By a scaling argument, we may assume $$\begin{aligned} \Vert \nabla u_n\Vert _{L^2(D^+_{4t}(x'_n){\setminus } D^+_{t/2}(x'_n))}+\Vert \tau (u_n)\Vert _{L^p(D^+_{4t}(x'_n){\setminus } D^+_{t/2}(x'_n))}\le \overline{\epsilon }. \end{aligned}$$ According to the small energy regularity theory (Lemmas 2.1 and 4.1), we obtain $$\begin{aligned} Osc_{D^+_{2t}(x'_n){\setminus } D^+_{t}(x'_n)}u_n\le C(\Vert \nabla u_n\Vert _{L^2(D^+_{4t}(x'_n){\setminus } D^+_{t/2}(x'_n))}+t\Vert \tau (u_n)\Vert _{L^2(D^+_{4t}(x'_n){\setminus } D^+_{t/2}(x'_n))}) \end{aligned}$$ (5.21) for any \(t\in (2r_nR, \frac{1}{2}\delta )\). 
Thus, \(u_n(\Omega _2)\subset K_{\delta _0}\) and we can extend the definition of \(u_n\) to the domain \(\widehat{\Omega }_2:= D_{\frac{\delta }{2}}(x'_n){\setminus } D_{2r_nR}(x'_n)\) by defining \({\widehat{u}}_n\) as (3.8). Then \({\widehat{u}}_n\in W^{2,2}(\widehat{\Omega }_2)\) and satisfies Eq. (3.11) where we take \(F_n(x)=\tau (u_n)(x)\) and define \(\Upsilon _{\widehat{u_n}}(\cdot ,\cdot )\), \(\widehat{F_n}(x)\) as in (3.21). Define $$\begin{aligned} {\widehat{u}}_n^*(r):=\frac{1}{2\pi r}\int _{\partial D_r(x_n')}{\widehat{u}}_n. \end{aligned}$$ Then by (5.21), we have $$\begin{aligned} \Vert {\widehat{u}}_n(x)-{\widehat{u}}_n^*(x)\Vert _{L^\infty (\widehat{\Omega }_2)}&\le \sup _{2r_nR\le t\le \frac{\delta }{2}} \Vert {\widehat{u}}_n(x)-{\widehat{u}}_n^*(x)\Vert _{L^\infty (D_{2t}(x_n'){\setminus } D_t(x_n'))}\\&\le C(1+\Vert D\sigma \Vert _{L^\infty })Osc_{D^+_{2t}(x_n'){\setminus } D^+_t(x_n')}u_n \le C(N)( \epsilon +\delta ). \end{aligned}$$ We have $$\begin{aligned} \int _{\widehat{\Omega }_2}\nabla {{\widehat{u}}_n}\nabla ({\widehat{u}}_n-{\widehat{u}}_n^*)dx=\int _{\partial \widehat{\Omega }_2}\frac{\partial {\widehat{u}}_n}{\partial n}({\widehat{u}}_n-{\widehat{u}}_n^*)-\int _{\widehat{\Omega }_2}\Delta {{\widehat{u}}_n}({\widehat{u}}_n-{\widehat{u}}_n^*)dx. 
\end{aligned}$$ On the one hand, by Jensen's inequality, we have $$\begin{aligned}&\int _{\widehat{\Omega }_2}\nabla {{\widehat{u}}_n}\nabla \left( {\widehat{u}}_n-{\widehat{u}}_n^*\right) dx\\&\quad =\int _{\widehat{\Omega }_2}\left| \nabla {{\widehat{u}}_n}\right| ^2dx-\int _{\widehat{\Omega }_2}\frac{\partial {\widehat{u}}_n}{\partial r}\frac{\partial {\widehat{u}}_n^*}{\partial r}dx\\&\quad \ge \int _{\widehat{\Omega }_2}\left| \nabla {{\widehat{u}}_n}\right| ^2dx-\left( \int _{\widehat{\Omega }_2}\left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2dx\right) ^{1/2}\left( \int _{\widehat{\Omega }_2}\left| \frac{1}{2\pi }\int _0^{2\pi } \frac{\partial {\widehat{u}}_n}{\partial r}\left( r,\theta \right) d\theta \right| ^2dx\right) ^{1/2}\\&\quad \ge \int _{\widehat{\Omega }_2}\left| \nabla {{\widehat{u}}_n}\right| ^2dx-\int _{\widehat{\Omega }_2}\left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2dx\\&\quad =\frac{1}{2}\int _{\widehat{\Omega }_2}\left| \nabla {{\widehat{u}}_n}\right| ^2dx-\int _{\widehat{\Omega }_2}\left( \left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {{\widehat{u}}_n}\right| ^2\right) dx. \end{aligned}$$ On the other hand, using Eq. (3.11), we get $$\begin{aligned} -\int _{\widehat{\Omega }_2}\Delta {{\widehat{u}}_n}({\widehat{u}}_n-{\widehat{u}}_n^*)dx&\le \int _{\widehat{\Omega }_2}|\Upsilon _{{\widehat{u}}_n}(\nabla {\widehat{u}}_n,\nabla {\widehat{u}}_n) +\widehat{F_n}||{\widehat{u}}_n-{\widehat{u}}_n^*|dx\\&\le C( \epsilon +\delta )\int _{\widehat{\Omega }_2}|\nabla {\widehat{u}}_n|^2dx+ C( \epsilon +\delta )\int _{\widehat{\Omega }_2}|\widehat{F_n}|dx\\&\le C( \epsilon +\delta )\int _{\widehat{\Omega }_2}|\nabla {\widehat{u}}_n|^2dx+ C( \epsilon +\delta )\Vert \tau _n\Vert _{L^2(\Omega _2)}. 
\end{aligned}$$ Thus, $$\begin{aligned}&\left( \frac{1}{2}-C\left( \epsilon +\delta \right) \right) \int _{\widehat{\Omega }_2}|\nabla {\widehat{u}}_n|^2dx\nonumber \\&\quad \le \int _{\partial \left( \widehat{\Omega }_2\right) }\frac{\partial {\widehat{u}}_n}{\partial n}\left( {\widehat{u}}_n-{\widehat{u}}_n^*\right) +\int _{\widehat{\Omega }_2}\left( \left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2-\frac{1}{2}|\nabla {{\widehat{u}}_n}|^2\right) dx+C\left( \epsilon +\delta \right) . \end{aligned}$$ (5.22) By the definition of \({\widehat{u}}_n\) (see (3.8)), we obtain $$\begin{aligned}&\int _{\widehat{\Omega }_2}\left( \left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {{\widehat{u}}_n}\right| ^2\right) dx\\&\quad = \int _{\Omega _2}\left( \left| \frac{\partial u_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {u_n}\right| ^2\right) dx\\&\quad \quad +\int _{\widehat{\Omega }_2{\setminus } \Omega _2}\left( \left| D\sigma \cdot \frac{\partial u_n\left( \rho \left( x\right) \right) }{\partial r}\right| ^2-\frac{1}{2}\left| D\sigma \cdot \nabla {u_n\left( \rho \left( x\right) \right) }\right| ^2\right) dx\\&\quad = \int _{\Omega _2}\left( \left| \frac{\partial u_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {u_n}\right| ^2\right) dx\\&\quad \quad +\int _{\Omega _2}\left( \left| D\sigma \cdot \frac{\partial u_n\left( x\right) }{\partial r}\right| ^2-\frac{1}{2}\left| D\sigma \cdot \nabla {u_n\left( x\right) }\right| ^2\right) dx. 
\end{aligned}$$ Note that $$\begin{aligned} \left| D\sigma \cdot \frac{\partial u_n\left( x\right) }{\partial r}\right| ^2&=\left\langle P\left( u_n\left( x\right) \right) \cdot \frac{\partial u_n\left( x\right) }{\partial r},P\left( u_n\left( x\right) \right) \cdot \frac{\partial u_n\left( x\right) }{\partial r}\right\rangle \\&=\left\langle P^TP\cdot \frac{\partial u_n\left( x\right) }{\partial r},\frac{\partial u_n\left( x\right) }{\partial r}\right\rangle \\&=\left\langle \left( P^TP-Id\right) \frac{\partial u_n\left( x\right) }{\partial r},\frac{\partial u_n\left( x\right) }{\partial r}\right\rangle +\left| \frac{\partial u_n\left( x\right) }{\partial r}\right| ^2, \end{aligned}$$ where P is the matrix corresponding to the linear operator defined by (3.9) under the orthonormal basis of \({\mathbb {R}}^N\). Similarly, $$\begin{aligned} |D\sigma \cdot \nabla u_n(x)|^2= \langle \big (P^TP-Id\big )\nabla u_n(x),\nabla u_n(x)\rangle +|\nabla u_n(x)|^2. \end{aligned}$$ Noting that \(\Xi |_{K}=Id\), by the continuity of eigenvalues of \(P^TP\), we have that for any \(\delta '>0\), there exists a constant \(\delta _1=\delta _1(\delta ')>0\), such that for any \(\xi \in {\mathbb {R}}^n\) and \(y\in K_{\delta _1}\), there holds $$\begin{aligned} \langle P^T(y)P(y)\xi ,\xi \rangle \le (1+\delta ')|\xi |^2. \end{aligned}$$ By (5.21), we have \(\Vert dist( u_n,K)\Vert _{L^\infty (\Omega _2)}\le C(\epsilon +\delta )\). Thus, for any \(\delta '>0\), \(\xi \in {\mathbb {R}}^n\), there holds $$\begin{aligned} \langle (P^T(u_n(x))P(u_n(x))-Id)\xi ,\xi \rangle \le \delta '|\xi |^2 \end{aligned}$$ when \(\epsilon \) and \(\delta \) are small enough. 
Thus, $$\begin{aligned}&\int _{\widehat{\Omega }_2}\left( \left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {{\widehat{u}}_n}\right| ^2\right) dx\nonumber \\&\quad \le 2\int _{\Omega _2}\left( \left| \frac{\partial u_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {u_n}\right| ^2\right) dx+C\delta '\int _{\Omega _2}\left| \nabla {u_n\left( x\right) }\right| ^2dx\nonumber \\&\quad = 2\sum _{i=1}^{m_n}\int _{D^+_{2^{i}\left( 2r_nR\right) }\left( x'_n\right) {\setminus } D^+_{2^{i-1}\left( 2r_nR\right) }\left( x'_n\right) }\left( \left| \frac{\partial u_n}{\partial r}\right| ^2-\frac{1}{2}\left| \nabla {u_n}\right| ^2\right) dx+C\delta '\int _{\Omega _2}\left| \nabla {u_n\left( x\right) }\right| ^2dx\nonumber \\&\quad \le C\sum _{i=1}^{m_n}2^{i}r_nR+C\delta '\int _{\Omega _2}\left| \nabla {u_n\left( x\right) }\right| ^2dx\le C\delta +C\delta '\int _{\Omega _2}\left| \nabla {u_n\left( x\right) }\right| ^2dx, \end{aligned}$$ (5.23) where the second-to-last inequality follows from Corollary 4.4. Combining inequality (5.22) with (5.23), we have $$\begin{aligned}&\left( \frac{1}{2}-C(\epsilon +\delta '+\delta )\right) \int _{\widehat{\Omega }_2}|\nabla {\widehat{u}}_n|^2dx\le \int _{\partial \widehat{\Omega }_2}\frac{\partial {\widehat{u}}_n}{\partial n}({\widehat{u}}_n-{\widehat{u}}_n^*)+C(\epsilon +\delta ). 
\end{aligned}$$ (5.24) As for the boundary term, by trace theory, we have $$\begin{aligned} \int _{\partial D_{\frac{1}{2}\delta }(x_n')}\frac{\partial {\widehat{u}}_n}{\partial n}({\widehat{u}}_n-{\widehat{u}}_n^*)&\le C(\epsilon +\delta )\int _{\partial ^+ D_{\frac{1}{2}\delta }(x_n')}|\nabla u_n|\\&\le C(\epsilon +\delta )\left( \Vert \nabla u_n\Vert _{L^2(D^+_{\frac{1}{2}\delta }(x_n'){\setminus } D^+_{\frac{1}{4}\delta }(x_n') )}+\delta \Vert \nabla ^2 u_n\Vert _{L^2(D^+_{\frac{1}{2}\delta }(x_n'){\setminus } D^+_{\frac{1}{4}\delta }(x_n') )}\right) \\&\le C(\epsilon +\delta )\left( \Vert \nabla u_n\Vert _{L^2(D^+_{\delta }(x_n'){\setminus } D^+_{\frac{1}{8}\delta }(x_n') )}+\delta \Vert \tau _n\Vert _{L^2(D^+_{\delta }(x_n'){\setminus } D^+_{\frac{1}{8}\delta }(x_n') )}\right) \\&\le C(\epsilon +\delta ), \end{aligned}$$ where the second-to-last inequality follows from Lemmas 2.1 and 4.1. Also, there holds $$\begin{aligned} \int _{\partial D_{2r_nR}(x_n')}\frac{\partial {\widehat{u}}_n}{\partial n}({\widehat{u}}_n-{\widehat{u}}_n^*) \le C(\epsilon +\delta ). \end{aligned}$$ Therefore, combining these results and taking \(\epsilon \) and \(\delta \) in (5.24) sufficiently small (then \(\delta '\) is small), we have $$\begin{aligned} \int _{\Omega _2}|\nabla u_n|^2dx\le \int _{\widehat{\Omega }_2}|\nabla {\widehat{u}}_n|^2dx\le C(\delta +\epsilon ). \end{aligned}$$ (5.25) Then the equality (5.12) follows from (5.19) and (5.25). This proves the energy identity in Case 1. Step 2 We prove the energy identity for Case 2, i.e., \(\limsup _{n\rightarrow \infty }\frac{d_n}{r_n}=\infty \). The proof is similar to that of Case 1. Firstly, we need to show that Claim (5.14) also holds in this case. In fact, if (5.14) is not true, then we can find \(t_n\rightarrow 0\), such that \(\lim _{n\rightarrow \infty }\frac{t_n}{r_n}=\infty \) and $$\begin{aligned} \int _{D^+_{8t_n}(x_n){\setminus } D^+_{t_n}(x_n)}|\nabla u_n|^2dx\ge \epsilon _6>0. 
\end{aligned}$$ (5.26) Then passing to a subsequence, we may assume $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{d_n}{t_n}=b\in [0,\infty ]. \end{aligned}$$ We set $$\begin{aligned} w_n(x):=u_n(x_n+t_nx) \end{aligned}$$ and $$\begin{aligned} B'_n:=\{x\in {\mathbb {R}}^2|x_n+t_nx\in D^+\}. \end{aligned}$$ Then \(w_n(x)\) lives in \(B'_n\) and 0 is an energy concentration point for \(w_n\). We have to consider the following two cases: \(\mathbf (c) \) \(b<\infty \). Then \(B'_n\) tends to \({\mathbb {R}}^2_b\) as \(n\rightarrow \infty \). Here, we also need to consider two cases. \(\mathbf (i) \) \(w_n\) has no other energy concentration points except 0. It is almost the same as \(\textit{Case (a)}\) in Step 1 where by passing to a subsequence, \(w_n\) converges to a nontrivial harmonic map \(w(x):{\mathbb {R}}^2_b\rightarrow N\) with free boundary \(w(\partial {\mathbb {R}}^2_b)\) on K which can be seen as the second bubble. This is a contradiction to the "one bubble" assumption. \(\mathbf (ii) \) \(w_n\) has another energy concentration point \(p\ne 0\). Similar to the process of \(\textit{Case (b)}\) in Step 1, there exist \(x_n'\rightarrow p\) and \(r_n'\rightarrow 0\) such that (5.17) holds. Then, passing to a subsequence, we may assume $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{d_n}{r'_nt_n}=d\in [0,\infty ]. \end{aligned}$$ Moreover, if \(d\in [0,\infty )\), then there exists a nontrivial harmonic map \({\widetilde{v}}^2(x):{\mathbb {R}}^2_d\rightarrow N\) with free boundary \({\widetilde{v}}^2(\partial {\mathbb {R}}^2_d)\) on K satisfying (5.18) as in \(\textit{Case (b)}\). 
If \(d=\infty \), by the process of constructing the first bubble in Case 2, there exists a nontrivial harmonic map \(v^2(x):{\mathbb {R}}^2\rightarrow N\) such that $$\begin{aligned} w_n(x_n'+r_n'x)\rightarrow v^2(x)\ in\ W^{1,2}_{loc}({\mathbb {R}}^2), \end{aligned}$$ that is $$\begin{aligned} u_n(x_n+t_nx_n'+t_nr_n'x)\rightarrow v^2(x)\ in\ W^{1,2}_{loc}({\mathbb {R}}^2). \end{aligned}$$ In both cases, we obtain the second bubble \(v^2(x)\) or \({\widetilde{v}}^2(x)\). This contradicts the "one bubble" assumption. \(\mathbf (d) \) \(b=\infty \). Then \(B'_n\) tends to \({\mathbb {R}}^2\) as \(n\rightarrow \infty \). Again, we need to consider two cases. \(\mathbf (iii) \) \(w_n\) has no other energy concentration points except 0. By Lemma 2.1, Theorem 2.3 and (5.26), there exists a nontrivial harmonic map \(v^2(x):{\mathbb {R}}^2\rightarrow N\) such that $$\begin{aligned} w_n(x)\rightarrow v^2(x)\ in\ W^{1,2}_{loc}({\mathbb {R}}^2{\setminus } \{0\}). \end{aligned}$$ Then, we get the second bubble \(v^2(x)\) which contradicts the "one bubble" assumption. \(\mathbf (iv) \) \(w_n\) has another energy concentration point \(p\ne 0\). Similar to \(\textit{Case (b)}\) in Step 1, there exist \(x_n'\rightarrow p\) and \(r_n'\rightarrow 0\) such that (5.17) holds and passing to a subsequence, we have $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{d_n}{r'_nt_n}=\infty . \end{aligned}$$ Moreover, by the process of constructing the first bubble in Case 2, there exists a nontrivial harmonic map \(v^2(x):{\mathbb {R}}^2\rightarrow N\) such that $$\begin{aligned} w_n(x_n'+r_n'x)\rightarrow v^2(x)\ in\ W^{1,2}_{loc}({\mathbb {R}}^2), \end{aligned}$$ that is $$\begin{aligned} u_n(x_n+t_nx_n'+t_nr_n'x)\rightarrow v^2(x)\ in\ W^{1,2}_{loc}({\mathbb {R}}^2). \end{aligned}$$ This is also a contradiction to the "one bubble" assumption. Thus, we proved our Claim (5.14). 
Secondly, we decompose the neck domain \(D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)\) as follows: $$\begin{aligned} D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)&=D^+_\delta (x_n){\setminus } D^+_{\frac{\delta }{2}}(x'_n)\cup D^+_{\frac{\delta }{2}}(x'_n){\setminus } D^+_{2d_n}(x'_n)\\&\quad \cup D^+_{2d_n}(x'_n){\setminus } D^+_{d_n}(x_n)\cup D^+_{d_n}(x_n){\setminus } D^+_{r_nR}(x_n)\\&:=\Omega _1\cup \Omega _2\cup \Omega _3\cup \Omega _4, \end{aligned}$$ when n is large. Since \(\lim _{n\rightarrow \infty }d_n=0\) and \(\lim _{n\rightarrow \infty }\frac{d_n}{r_n}=\infty \), when n is large enough, it is easy to see that $$\begin{aligned} \Omega _1\subset D^+_\delta (x_n){\setminus } D^+_{\frac{\delta }{4}}(x_n),\quad and \;\Omega _3\subset D^+_{4d_n}(x_n){\setminus } D^+_{d_n}(x_n). \end{aligned}$$ Moreover, for any \(2d_n\le t\le \frac{1}{2}\delta \), there holds $$\begin{aligned} D^+_{2t}(x'_n){\setminus } D^+_{t}(x'_n)\subset D^+_{4t}(x_n){\setminus } D^+_{t/2}(x_n). \end{aligned}$$ By assumption (5.14), we have $$\begin{aligned} E(u_n;\Omega _1)+E(u_n;\Omega _3)\le \epsilon ^2 \end{aligned}$$ (5.27) and $$\begin{aligned} \int _{D^+_{2t}(x'_n){\setminus } D^+_{t}(x'_n)}|\nabla u_n|^2dx\le \epsilon ^2 \text{ for } \text{ any } \; t\in \left( 2d_n, \frac{1}{2}\delta \right) . \end{aligned}$$ (5.28) Noting that \(\Omega _4=D^+_{d_n}(x_n){\setminus } D^+_{r_nR}(x_n)=D_{d_n}(x_n){\setminus } D_{r_nR}(x_n)\), by the well-known blow-up analysis theory of harmonic maps with interior blow-up points (which also applies to sequences of maps with uniformly \(L^p\)-bounded tension fields for some \(p\ge \frac{6}{5}\)), there holds $$\begin{aligned} \lim _{R\rightarrow \infty }\lim _{n\rightarrow \infty }E(u_n;D_{d_n}(x_n){\setminus } D_{r_nR}(x_n))=0 \end{aligned}$$ (5.29) and $$\begin{aligned} \lim _{R\rightarrow \infty }\lim _{n\rightarrow \infty }Osc(u_n)_{D_{d_n}(x_n){\setminus } D_{r_nR}(x_n)}=0. \end{aligned}$$ (5.30) See [6, 20, 32] for details. 
Lastly, to estimate the energy concentration in \(\Omega _2\), we can use the same argument as in the previous Case 1 to get $$\begin{aligned} \int _{\Omega _2}|\nabla u_n|^2dx\le C(\delta +\epsilon ). \end{aligned}$$ (5.31) Combining (5.27), (5.29) with (5.31), it is easy to obtain (5.12). We proved the energy identity. Next, we prove the no neck property in Theorem 5.1, i.e., the base map and the bubbles are connected in the target manifold. No neck property: Here, we also need to consider two cases. But, for Case 2, we use the same argument as in the previous reasoning where we split the neck domain into two parts, an interior domain and a boundary domain. Then, with the help of the no neck results in [20, 32] for a sequence of maps with uniformly \(L^2\)-bounded tension fields, we just need to prove (5.13) for Case 1. We may assume \(\lim _{n\rightarrow \infty }\frac{d_n}{r_n}=a\) and decompose the neck domain \(D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)=\Omega _1\cup \Omega _2\cup \Omega _3\), when n and R are large. 
By assumption (5.14) and small energy regularity (Lemmas 2.1 and 4.1), we have $$\begin{aligned} \Vert u_n\Vert _{Osc\left( D^+_{\delta }\left( x_n\right) {\setminus } D^+_{\frac{\delta }{4}}\left( x'_n\right) \right) }&\le \Vert u_n\Vert _{Osc\left( D^+_{\delta }\left( x_n\right) {\setminus } D^+_{\frac{\delta }{5}}\left( x_n\right) \right) }\nonumber \\&\le C\left( \Vert \nabla u_n\Vert _{L^2\left( D^+_{\frac{4\delta }{3}}\left( x_n\right) {\setminus } D^+_{\frac{\delta }{6}}\left( x_n\right) \right) }+\delta \Vert \tau _n\Vert _{L^2\left( D^+_{\frac{4\delta }{3}}\left( x_n\right) {\setminus } D^+_{\frac{\delta }{6}}\left( x_n\right) \right) }\right) \nonumber \\&\le C\left( \epsilon +\delta \right) \end{aligned}$$ (5.32) and $$\begin{aligned}&\Vert u_n\Vert _{Osc\left( D^+_{4r_nR}\left( x'_n\right) {\setminus } D^+_{r_nR}\left( x_n\right) \right) }\nonumber \\&\quad \le \Vert u_n\Vert _{Osc\left( D^+_{5r_nR}\left( x_n\right) {\setminus } D^+_{r_nR}\left( x_n\right) \right) } \nonumber \\&\quad \le C\left( \Vert \nabla u_n\Vert _{L^2\left( D^+_{6r_nR}\left( x_n\right) {\setminus } D^+_{\frac{3r_nR}{4}}\left( x_n\right) \right) }+r_nR\Vert \tau _n\Vert _{L^2\left( D^+_{6r_nR}\left( x_n\right) {\setminus } D^+_{\frac{3r_nR}{4}}\left( x_n\right) \right) }\right) \nonumber \\&\quad \le C\left( \epsilon +\delta \right) , \end{aligned}$$ (5.33) when n, R are large and \(\delta \) is small. Without loss of generality, we may assume \(\frac{1}{2}\delta =2^{m_n}(2r_nR)\) where \(m_n\rightarrow \infty \) as \(n\rightarrow \infty \). Inspired by a technique by Ding [5] for the interior bubbling case, we set \(Q(t):=D^+_{2^{t_0+t}2r_nR}(x_n'){\setminus } D^+_{2^{t_0-t}2r_nR}(x_n')\), \({\widehat{Q}}(t):=D_{2^{t_0+t}2r_nR}(x_n'){\setminus } D_{2^{t_0-t}2r_nR}(x_n')\) and define $$\begin{aligned} f(t):=\int _{Q(t)}|\nabla u_n|^2dx, \end{aligned}$$ where \(0\le t_0\le m_n\) and \(0\le t\le \min \{t_0,m_n-t_0\}\). 
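Before estimating, it may help to record how the derivative of f enters (a direct computation in polar coordinates centred at \(x_n'\)): writing \(r_\pm (t)=2^{t_0\pm t}\cdot 2r_nR\), so that \(\frac{d}{dt}r_\pm (t)=\pm \log 2\, r_\pm (t)\), we have $$\begin{aligned} f'(t)=\log 2\left( r_+(t)\int _{\partial ^+D^+_{r_+(t)}(x_n')}|\nabla u_n|^2+r_-(t)\int _{\partial ^+D^+_{r_-(t)}(x_n')}|\nabla u_n|^2\right) . \end{aligned}$$ It is this identity that converts the boundary estimates below into the differential inequality (5.36).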
Similar to the proof of (5.22) and (5.23), we have $$\begin{aligned}&\left( \frac{1}{2}-C\left( \epsilon +\delta \right) \right) \int _{{\widehat{Q}}\left( t\right) }|\nabla {\widehat{u}}_n|^2dx\nonumber \\&\quad \le \int _{\partial \left( {\widehat{Q}}\left( t\right) \right) }\frac{\partial {\widehat{u}}_n}{\partial n}\left( {\widehat{u}}_n-{\widehat{u}}_n^*\right) +\int _{{\widehat{Q}}\left( t\right) }\left( \left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2-\frac{1}{2}|\nabla {{\widehat{u}}_n}|^2\right) dx+C\left( \epsilon +\delta \right) \int _{Q\left( t\right) }|\tau _n|dx \end{aligned}$$ (5.34) and $$\begin{aligned} \int _{{\widehat{Q}}\left( t\right) }\left( \left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2-\frac{1}{2}|\nabla {{\widehat{u}}_n}|^2\right) dx&\le 2\int _{Q\left( t\right) }\left( \left| \frac{\partial u_n}{\partial r}\right| ^2-\frac{1}{2}|\nabla {u_n}|^2\right) dx+C\delta ' \int _{Q\left( t\right) }|\nabla {u_n\left( x\right) }|^2dx\nonumber \\&\le C2^{t_0+t}r_nR+C\delta '\int _{Q\left( t\right) }|\nabla {u_n\left( x\right) }|^2dx \end{aligned}$$ (5.35) where the last inequality follows from Corollary 4.4. 
As for the boundary, by Poincaré's inequality, we have $$\begin{aligned}&\int _{\partial \left( D_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\frac{\partial {\widehat{u}}_n}{\partial n}\left( {\widehat{u}}_n-{\widehat{u}}_n^*\right) \\&\quad \le \left( \int _{\partial \left( D_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2\right) ^{\frac{1}{2}}\left( \int _{\partial \left( D_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\left| {\widehat{u}}_n-{\widehat{u}}_n^*\right| ^2\right) ^{\frac{1}{2}}\\&\quad \le C\left( \int _{\partial \left( D_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\left| \frac{\partial {\widehat{u}}_n}{\partial r}\right| ^2\right) ^{\frac{1}{2}}\left( 2^{t_0+t}2r_nR\int _0^{2\pi }\left| \frac{\partial {\widehat{u}}_n}{\partial \theta }\right| ^2\right) ^{\frac{1}{2}}\\&\quad \le C2^{t_0+t}2r_nR\int _{\partial \left( D_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\left| \nabla {\widehat{u}}_n\right| ^2\\&\quad \le C2^{t_0+t}2r_nR\int _{\partial ^+ \left( D^+_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\left| \nabla u_n\right| ^2. \end{aligned}$$ Similarly, we get $$\begin{aligned} \int _{\partial \left( D_{2^{t_0-t}2r_nR}\left( x_n'\right) \right) }\frac{\partial {\widehat{u}}_n}{\partial n}\left( {\widehat{u}}_n-{\widehat{u}}_n^*\right) \le C2^{t_0-t}2r_nR\int _{\partial ^+ \left( D^+_{2^{t_0-t}2r_nR}\left( x_n'\right) \right) }\left| \nabla u_n\right| ^2. 
\end{aligned}$$ Using these together, we have $$\begin{aligned}&\left( \frac{1}{2}-C\left( \epsilon +\delta '+\delta \right) \right) \int _{{\widehat{Q}}\left( t\right) }\left| \nabla {\widehat{u}}_n\right| ^2dx \\&\quad \le C2^{t_0+t}2r_nR\int _{\partial ^+ \left( D^+_{2^{t_0+t}2r_nR}\left( x_n'\right) \right) }\left| \nabla u_n\right| ^2+C2^{t_0-t}2r_nR\int _{\partial ^+ \left( D^+_{2^{t_0-t}2r_nR}\left( x_n'\right) \right) }\left| \nabla u_n\right| ^2 \\&\qquad +C2^{t_0+t}r_nR+C\left( \epsilon +\delta \right) \int _{Q\left( t\right) }\left| \tau _n\right| dx. \end{aligned}$$ Taking \(\epsilon \) and \(\delta \) sufficiently small, we get $$\begin{aligned}&\int _{Q(t)}|\nabla u_n|^2dx\le C2^{t_0+t}2r_nR\int _{\partial ^+ (D^+_{2^{t_0+t}2r_nR}(x_n'))}|\nabla u_n|^2\\&\quad +C2^{t_0-t}2r_nR\int _{\partial ^+ (D^+_{2^{t_0-t}2r_nR}(x_n'))}|\nabla u_n|^2\\&\quad +C2^{t_0+t}r_nR. \end{aligned}$$ Therefore, $$\begin{aligned} f(t)\le \frac{C}{\log 2}f'(t)+C2^{t_0+t}r_nR. \end{aligned}$$ (5.36) Thus, $$\begin{aligned} \left( 2^{-\frac{1}{C}t}f(t)\right) '\ge -C2^{t_0+(1-1/C)t}r_nR. \end{aligned}$$ Integrating from 2 to L, we arrive at $$\begin{aligned} f(2)&\le C2^{-\frac{1}{C}L}f(L)+C2^{t_0}r_nR\int _2^L2^{(1-1/C)t}dt\le C2^{-\frac{1}{C}L}f(L)\\&\quad +C2^{t_0}r_nR2^{(1-1/C)L}. \end{aligned}$$ Now, let \(t_0=i\) and \(L=L_i:=\min \{i,m_n-i\}\). 
Then, we have \(Q(L_i)\subset D^+_{\delta /2}(x'_n){\setminus } D^+_{2r_nR}(x'_n)\subset D^+_\delta (x_n){\setminus } D^+_{r_nR}(x_n)\) and $$\begin{aligned}&\int _{D^+_{2^{i+2}2r_nR}\left( x_n'\right) {\setminus } D^+_{2^{i-2}2r_nR}\left( x_n'\right) }|\nabla u_n|^2dx\\&\quad \le CE\left( u_n,D^+_\delta \left( x_n\right) {\setminus } D^+_{r_nR}\left( x_n\right) \right) 2^{-\frac{1}{C}L_i}+C2^{i}r_nR2^{\left( 1-1/C\right) L_i}\\&\quad \le CE\left( u_n,D^+_\delta \left( x_n\right) {\setminus } D^+_{r_nR}\left( x_n\right) \right) 2^{-\frac{1}{C}L_i}+C2^{i}r_nR2^{\left( 1-1/C\right) \left( m_n-i\right) }\\&\quad \le CE\left( u_n,D^+_\delta \left( x_n\right) {\setminus } D^+_{r_nR}\left( x_n\right) \right) 2^{-\frac{1}{C}L_i}+C\delta 2^{\left( -1/C\right) \left( m_n-i\right) }\\&\quad \le C\epsilon 2^{-\frac{1}{C}L_i}+C\delta 2^{\left( -1/C\right) \left( m_n-i\right) }, \end{aligned}$$ where we used the energy identity (5.12). By Lemmas 2.1 and 4.1, we obtain $$\begin{aligned}&Osc_{D^+_{2^{i+1}2r_nR}(x_n'){\setminus } D^+_{2^{i-1}2r_nR}(x_n')}u_n\\&\quad \le C\left( \Vert \nabla u_n\Vert _{L^2(D^+_{2^{i+2}2r_nR}(x_n'){\setminus } D^+_{2^{i-2}2r_nR}(x_n'))}+(2^{i+2}2r_nR)\Vert \tau _n\Vert _{L^2(D^+_{2^{i+2}2r_nR}(x_n'){\setminus } D^+_{2^{i-2}2r_nR}(x_n'))}\right) \\&\quad \le C\left( \Vert \nabla u_n\Vert _{L^2(D^+_{2^{i+2}2r_nR}(x_n'){\setminus } D^+_{2^{i-2}2r_nR}(x_n'))}+2^{i}r_nR\right) . \end{aligned}$$ Summing over i from 2 to \(m_n-2\), we have $$\begin{aligned} \Vert u_n\Vert _{Osc(D^+_{\delta /4}(x'_n){\setminus } D^+_{4r_nR}(x'_n))}&\le \sum _{i=2}^{m_n-2} \Vert u_n\Vert _{Osc(D^+_{2^{i+1}2r_nR}(x_n'){\setminus } D^+_{2^{i-1}2r_nR}(x_n'))}\\&\le C\sum _{i=2}^{m_n-2}\left( \epsilon 2^{-\frac{1}{C}L_i}+\delta 2^{(-1/C)(m_n-i)}+2^{i}r_nR\right) \\&\le C\sum _{i=2}^{m_n-2}2^{-\frac{1}{C}i}(\epsilon +\delta )+C\delta \le C(\epsilon +\delta ). 
\end{aligned}$$ This inequality and (5.32), (5.33) imply (5.13) and we have proved there is no neck during the blow-up process. \(\square \) Now, we can prove Theorem 1.1. Proof of Theorem 1.1 Combining the blow-up theory of a sequence of maps with uniformly \(L^2\)-bounded tension fields from a closed Riemann surface (see [6, 20, 24, 26, 32]) and Theorem 5.1, we can easily get the conclusion of Theorem 1.1 by following the standard blow-up scheme in [6]. On the other hand, it is well known that harmonic spheres are minimal spheres and harmonic disks with free boundary on K are minimal disks with free boundary on K (see e.g. the proof of Theorem B in [27], page 300). \(\square \) 6 Application to the harmonic map flow with free boundary In this section, we will apply the results in Theorem 1.1 to the harmonic map flow with free boundary and prove Theorem 1.2 and Theorem 1.3. Firstly, we have Lemma 6.1 Let \(u:M\times (0,\infty )\rightarrow N\) be a global weak solution to (1.7-1.10), which is smooth away from a finite number of singular points. There holds the estimate $$\begin{aligned} \int _0^\infty \int _M|\partial _tu|^2dxdt\le E(u_0). \end{aligned}$$ (6.1) Moreover, \(E(u(\cdot ,t))\) is continuous on \([0,\infty )\) and non-increasing. Proof The proof is similar to Lemma 3.4 in [44]. 
Multiply the equation (1.7) by \(\partial _t u\) and integrate by parts, for any \(0\le t_1\le t_2\le \infty \), to get $$\begin{aligned} \int _{t_1}^{t_2}\int _M|\partial _tu|^2dxdt&=\int _{t_1}^{t_2}\int _M-\Delta _g u\cdot \partial _tudxdt\\&=\int _{t_1}^{t_2}\int _{\partial M}\frac{\partial u}{\partial \overrightarrow{n}}\cdot \partial _tu-\int _{t_1}^{t_2}\int _M\nabla u\cdot \nabla (\partial _tu)dxdt\\&=-\int _{t_1}^{t_2}\int _M\frac{1}{2}\partial _t|\nabla u|^2dxdt=E(u(\cdot ,t_1))-E(u(\cdot ,t_2)), \end{aligned}$$ where \(\overrightarrow{n}\) is the outward unit normal vector field on \(\partial M\) and we used the free boundary condition that \(\frac{\partial u}{\partial \overrightarrow{n}}\bot \partial _tu\). Then the conclusion of the lemma follows immediately. \(\square \) Similar to the case of a closed domain (see Lemma 2.5 in [24]), we have Lemma 6.2 Let \(u\in C^\infty (M\times (0,T_0),N)\) be a solution to (1.7–1.10). Then there exists a constant \(R_0>0\) such that, for any \(x_0\in M\), \(0<t\le s<T_0\) and \(0<R\le R_0\), there hold: $$\begin{aligned} E(u(s);B^M_{R}(x_0))\le E(u(t);B^M_{2R}(x_0))+C\frac{s-t}{R^2}E(u_0), \end{aligned}$$ (6.2) and $$\begin{aligned} E(u(t);B^M_{R}(x_0))\le E(u(s);B^M_{2R}(x_0))+C\int _t^s\int _M|\partial _tu|^2dxdt+C\frac{s-t}{R^2}E(u_0). \end{aligned}$$ (6.3) Proof Let \(\eta \in C^\infty _0(B^M_{2R}(x_0))\) be such that \(0\le \eta \le 1\), \(\eta |_{B^M_{R}(x_0)}\equiv 1\) and \(|\nabla \eta |\le \frac{C}{R}\). Multiplying (1.7) by \(\eta ^2\partial _t u\) and integrating by parts, we get $$\begin{aligned} \int _M|\partial _tu|^2\eta ^2dx+\frac{d}{dt}(\frac{1}{2}\int _M|\nabla u|^2\eta ^2dx)&=\int _{\partial M}\frac{\partial u}{\partial \overrightarrow{n}}\cdot \partial _t u\eta ^2-2\int _M\partial _tu\nabla u\eta \nabla \eta dx\\&=-2\int _M\partial _tu\nabla u\eta \nabla \eta dx, \end{aligned}$$ where we used the free boundary condition that \(\frac{\partial u}{\partial \overrightarrow{n}}\bot \partial _tu\). 
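The vanishing of the boundary integrals deserves a word. Since \(u(\cdot ,t)\) maps \(\partial M\) into the supporting manifold \(K\) for every \(t\), the velocity \(\partial _tu\) is tangent to \(K\) along \(\partial M\), while the free boundary condition in (1.7–1.10) forces \(\partial u/\partial \overrightarrow{n}\) to be normal to \(K\). Writing \(T_uK\) for the tangent space of \(K\) at \(u\) (our notation), we thus have, pointwise on \(\partial M\), $$\begin{aligned} \partial _tu\in T_{u}K \quad \text{and}\quad \frac{\partial u}{\partial \overrightarrow{n}}\perp T_{u}K \quad \Longrightarrow \quad \frac{\partial u}{\partial \overrightarrow{n}}\cdot \partial _tu=0, \end{aligned}$$ so that both boundary terms above vanish after integration over \(\partial M\).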
Since $$\begin{aligned} |2\int _M\partial _tu\nabla u\eta \nabla \eta dx|\le \frac{1}{2}\int _M|\partial _tu|^2\eta ^2dx+2\int _M|\nabla u|^2|\nabla \eta |^2dx, \end{aligned}$$ we have $$\begin{aligned}&-\frac{3}{2}\int _M|\partial _tu|^2\eta ^2dx-2\int _M|\nabla u|^2|\nabla \eta |^2dx\le \frac{d}{dt}\left( \frac{1}{2}\int _M|\nabla u|^2\eta ^2dx\right) \\&\quad \le 2\int _M|\nabla u|^2|\nabla \eta |^2dx. \end{aligned}$$ Integrating the above inequality from t to s, we obtain the conclusion of the lemma. \(\square \) With the help of Lemma 6.2, we can apply the standard argument for the closed case (see Lemma 4.1 in [24]) to obtain the following: Lemma 6.3 Let \(u\in C^\infty (M\times (0,T_0),N)\) be a solution to (1.7–1.10). Suppose \(x_0\in M\) is the only singular point at time \(T_0\). Then there exists a positive number \(m>0\) such that $$\begin{aligned} |\nabla u|^2(x,t)dx\rightarrow m\delta _{x_0}+|\nabla u|^2(x,T_0)dx, \end{aligned}$$ (6.4) as \(t \uparrow T_0\), in the sense of Radon measures. Here, \(\delta _{x_0}\) denotes the \(\delta \)-mass at \(x_0\). Now, we begin to prove Theorems 1.2 and 1.3. Firstly, it is easy to see that Lemmas 6.1, 6.3 and Theorem 1.1 imply Theorem 1.2. In fact, Proof of Theorem 1.2 By Lemma 6.1, we can find a sequence \(t_n\uparrow \infty \) such that $$\begin{aligned} \lim _{n\rightarrow \infty }\int _M|\partial _t u|^2(\cdot ,t_n)dx=0\quad \text{and}\quad E(u(\cdot ,t_n))\le E(u_0). \end{aligned}$$ Take the sequence \(u_n=u(\cdot ,t_n)\), \(\tau (u_n)=\partial _tu(\cdot ,t_n)\) in Theorem 1.1. Combining this with Lemma 6.3, the conclusion of Theorem 1.2 follows immediately. \(\square \) Proof of Theorem 1.3 It suffices to consider the case where \(x_0\in \partial M\) is the only singular point at time \(T_0\). For the case of an interior singularity \(x_0\in M{\setminus }\partial M\), one can refer to [24]. Without loss of generality, we may assume \(M=D^+_1(0)\) and \(x_0=0\).
By Lemma 6.3, there exist sequences \(t_n\uparrow T_0\) and \(\lambda _n\downarrow 0\) such that $$\begin{aligned} \lim _{n\rightarrow \infty }\int _{D^+_{\lambda _n}(0)}|\nabla u|^2(\cdot ,t_n)dx=m. \end{aligned}$$ Let \(u_n(x,t)=u(\lambda _nx,t_n+\lambda _n^2t).\) Without loss of generality, we may assume \(t_n-2\lambda _n^2>0\). Then \(u_n\) is defined in \(D^+_{\lambda _n^{-1}}(0)\times [-2,0]\) satisfying (1.7) and $$\begin{aligned} \int _{-2}^0\int _{D^+_{\lambda _n^{-1}}(0)}|\partial _tu_n|^2dxdt=\int _{t_n-2\lambda _n^2}^{t_n}\int _{D^+_1(0)}|\partial _tu|^2dxdt\rightarrow 0 \end{aligned}$$ as \(n\rightarrow \infty \). By Fubini's theorem, there exists \(s_n\in (-1,-\frac{1}{2})\) such that $$\begin{aligned} \lim _{n\rightarrow \infty }\int _{D^+_{\lambda _n^{-1}}(0)}|\partial _tu_n|^2(\cdot ,s_n)dx= 0. \end{aligned}$$ (6.5) For the sequence \(\{u_n(\cdot ,s_n)\}\), there holds $$\begin{aligned} \lim _{R\rightarrow \infty }\lim _{n\rightarrow \infty }\int _{D^+_R(0)}|\nabla u_n|^2(\cdot ,s_n)dx=m. \end{aligned}$$ (6.6) In fact, on the one hand, by (6.2), we have $$\begin{aligned} \int _{D^+_R(0)}|\nabla u_n|^2(\cdot ,s_n)dx&=\int _{D^+_{\lambda _nR}(0)}|\nabla u|^2(\cdot ,t_n+\lambda _n^2s_n)dx \\&\ge \int _{D^+_{\lambda _n}(0)}|\nabla u|^2(\cdot ,t_n)dx-C\frac{1}{R^2}E(u_0). \end{aligned}$$ Thus, $$\begin{aligned} \lim _{R\rightarrow \infty }\lim _{n\rightarrow \infty }\int _{D^+_R(0)}|\nabla u_n|^2(\cdot ,s_n)dx\ge m. \end{aligned}$$ (6.7) On the other hand, by (6.4), for any \(R>0\) and \(\sigma >0\), we have $$\begin{aligned} \lim _{n\rightarrow \infty }\int _{D^+_{\lambda _nR}(0)}|\nabla u|^2(\cdot ,t_n+\lambda _n^2s_n)dx&\le \lim _{n\rightarrow \infty }\int _{D^+_{\sigma }(0)}|\nabla u|^2(\cdot ,t_n+\lambda _n^2s_n)dx\\&=m+\int _{D^+_{\sigma }(0)}|\nabla u|^2(\cdot ,T_0)dx. 
\end{aligned}$$ Letting \(\sigma \rightarrow 0\), we obtain $$\begin{aligned} \lim _{n\rightarrow \infty }\int _{D^+_R(0)}|\nabla u_n|^2(\cdot ,s_n)dx\le m \end{aligned}$$ (6.8) and (6.6) follows immediately. Fixing \(R>0\), we consider the sequence \(\{u_n(\cdot ,s_n)\}_{n=1}^\infty \) which is defined in \(D_R^+(0)\). By (6.8) and (6.5), we know it is a sequence of maps from \(D_R^+(0)\) to N with finite energy and tension fields $$\begin{aligned} \Vert \tau _n\Vert _{L^2(D^+_R(0))}=\Vert \partial _tu_n(\cdot ,s_n)\Vert _{L^2(D^+_R(0))}\rightarrow 0 \end{aligned}$$ as \(n\rightarrow \infty \). Moreover, for each \(R>0\), \(u_n(\cdot ,s_n)\) weakly converges to a constant map. In fact, by Lemma 6.3, for any \(\sigma >0\), we have $$\begin{aligned} \lim _{n\rightarrow \infty }E(u_n(\cdot ,s_n),D_R^+{\setminus } D^+_\sigma )&=\lim _{n\rightarrow \infty }E(u(\cdot ,t_n+\lambda _n^2 s_n),D_{\lambda _n R}^+{\setminus } D^+_{\lambda _n\sigma })\\&\le \lim _{n\rightarrow \infty }E(u(\cdot ,T_0),D_{\lambda _n R})=0. \end{aligned}$$ According to Theorem 5.1, we know there exist \(L_R\) nontrivial bubbles \(\{w^i_R\}_{i=1}^{L_R}\) such that $$\begin{aligned} \lim _{n\rightarrow \infty }E(u_n(\cdot ,s_n),D_R^+)=\sum _{i=1}^{L_R}E(w_R^i). \end{aligned}$$ (6.9) Since the energy of the bubble has a lower bound, i.e. \(E(w)\ge \overline{\epsilon _0}:=\min \{\epsilon _0,\epsilon _5\}\), we have \(1\le L_R\le \frac{m}{\overline{\epsilon _0}}+1\). Therefore, there exist a subsequence \(R\uparrow \infty \) and a constant \(L\in [1,\frac{m}{\overline{\epsilon _0}}+1]\) such that \(L_R=L\) and $$\begin{aligned} m=\lim _{R\rightarrow \infty }\lim _{n\rightarrow \infty }E(u_n(\cdot ,s_n),D_R^+) =\lim _{R\rightarrow \infty }\sum _{i=1}^{L}E(w_R^i). \end{aligned}$$ (6.10) Using Theorem 1.1 with \(M=S^2\) or \(M=D\) and \(\tau \equiv 0\), there exist \(L_i\) bubbles \(\{w^j\}_{j=1}^{L_i}\) such that $$\begin{aligned} \lim _{R\rightarrow \infty }E(w_R^i)=\sum _{j=1}^{L_i}E(w^j). 
\end{aligned}$$ Then $$\begin{aligned} m=\lim _{R\rightarrow \infty }\lim _{n\rightarrow \infty }E(u_n(\cdot ,s_n),D_R^+) =\lim _{R\rightarrow \infty }\sum _{i=1}^{L}E(w_R^i)=\sum _{i=1}^{L}\sum _{j=1}^{L_i}E(w^j). \end{aligned}$$ (6.11) Combining with Lemma 6.3, we obtain the conclusion of Theorem 1.3. \(\square \)

Acknowledgements Open access funding provided by Max Planck Society.

References

1. Chang, K.C.: Heat flow and boundary value problem for harmonic maps. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 6(5), 363–395 (1989)
2. Chen, Y., Lin, F.: Evolution equations with a free boundary condition. J. Geom. Anal. 8(2), 179–197 (1998)
3. Chen, Q., Jost, J., Wang, G., Zhu, M.: The boundary value problem for Dirac-harmonic maps. J. Eur. Math. Soc. (JEMS) 15(3), 997–1031 (2013)
4. Colding, T., Minicozzi, W.: Width and finite extinction time of Ricci flow. Geom. Topol. 12(5), 2537–2586 (2008)
5. Ding, W.: Lectures on Heat Flow of Harmonic Maps. Lecture notes at CTS, NTHU, Taiwan (1998)
6. Ding, W., Tian, G.: Energy identity for a class of approximate harmonic maps from surfaces. Comm. Anal. Geom. 3(3–4), 543–554 (1995)
7. Frauenfelder, U.: Gromov convergence of pseudoholomorphic disks. J. Fixed Point Theory Appl. 3(2), 215–271 (2008)
8. Gulliver, R., Jost, J.: Harmonic maps which solve a free-boundary problem. J. Reine Angew. Math. 381, 61–89 (1987)
9. Hamilton, R.: Harmonic Maps of Manifolds with Boundary. Lecture Notes in Mathematics, vol. 471. Springer, New York (1975)
10. Hélein, F.: Harmonic Maps, Conservation Laws and Moving Frames. Cambridge Tracts in Mathematics, vol. 150, 2nd edn. Cambridge University Press, Cambridge (2002). Translated from the 1996 French original, with a foreword by James Eells
11. Hong, M., Yin, H.: On the Sacks–Uhlenbeck flow of Riemannian surfaces. Comm. Anal. Geom. 21(5), 917–955 (2013)
12. Ivashkovich, S., Shevchishin, V.: Gromov compactness theorem for J-complex curves with boundary. Int. Math. Res. Notices (22), 1167–1206 (2000)
13. Jost, J.: Two-Dimensional Geometric Variational Problems. Wiley, New York (1991)
14. Jost, J.: Geometry and Physics. Springer, New York (2009)
15. Lamm, T.: Energy identity for approximations of harmonic maps from surfaces. Trans. Am. Math. Soc. 362, 4077–4097 (2010)
16. Lamm, T., Sharp, B.: Global estimates and energy identities for elliptic systems with antisymmetric potentials. Comm. Partial Differ. Equ. 41, 579–608 (2016)
17. Laurain, P., Petrides, R.: Regularity and quantification for harmonic maps with free boundary. Adv. Calc. Var. 10(1), 69–82 (2017)
18. Laurain, P., Rivière, T.: Angular energy quantization for linear elliptic systems with antisymmetric potentials and applications. Anal. PDE 7(1), 1–41 (2014)
19. Li, J.: Heat flows and harmonic maps with a free boundary. Math. Z. 217(3), 487–495 (1994)
20. Li, J., Zhu, X.: Energy identity for the maps from a surface with tension field bounded in \(L^p\). Pac. J. Math. 260(1), 181–195 (2012)
21. Li, J., Zhu, X.: Small energy compactness for approximate harmonic mappings. Commun. Contemp. Math. 13(5), 741–763 (2011)
22. Li, Y., Wang, Y.: Bubbling location for sequences of approximated f-harmonic maps from surfaces. Internat. J. Math. 21(4), 475–495 (2010)
23. Li, Y., Wang, Y.: A weak energy identity and the length of necks for a sequence of Sacks–Uhlenbeck \(\alpha \)-harmonic maps. Adv. Math. 225(3), 1134–1184 (2010)
24. Lin, F., Wang, C.: Energy identity of harmonic map flow from surfaces at finite singular time. Calc. Var. Partial Differ. Equ. 6, 369–380 (1998)
25. Lin, F., Rivière, T.: Energy quantization for harmonic maps. Duke Math. J. 111(1), 177–193 (2002)
26. Luo, Y.: Energy identity and removable singularities of maps from a Riemannian surface with tension field unbounded in \(L^2\). Pac. J. Math. 256(2), 365–380 (2012)
27. Ma, L.: Harmonic map heat flow with free boundary. Comment. Math. Helv. 66, 279–301 (1991)
28. McDuff, D., Salamon, D.: J-Holomorphic Curves and Symplectic Topology. AMS Colloquium Publications, New York (2004)
29. Parker, T.: Bubble tree convergence for harmonic maps. J. Differ. Geom. 44(3), 595–633 (1996)
30. Parker, T., Wolfson, J.: Pseudo-holomorphic maps and bubble trees. J. Geom. Anal. 3(1), 63–98 (1993)
31. Qing, J.: On singularities of the heat flow for harmonic maps from surfaces into spheres. Comm. Anal. Geom. 3(1–2), 297–315 (1995)
32. Qing, J., Tian, G.: Bubbling of the heat flows for harmonic maps from surfaces. Commun. Pure Appl. Math. 50(4), 295–310 (1997)
33. Rivière, T.: Conservation laws for conformally invariant variational problems. Invent. Math. 168, 1–22 (2007)
34. Rivière, T.: Conformally Invariant 2-Dimensional Variational Problems. Cours joint de l'Institut Henri Poincaré – Paris XII Créteil (2010)
35. Rivière, T., Struwe, M.: Partial regularity for harmonic maps and related problems. Comm. Pure Appl. Math. 61(4), 451–463 (2008)
36. Moser, R.: An \(L^p\) regularity theory for harmonic maps. Trans. Am. Math. Soc. 367(1), 1–30 (2015)
37. Rupflin, M.: An improved uniqueness result for the harmonic map flow in two dimensions. Calc. Var. Partial Differ. Equ. 33(3), 329–341 (2008)
38. Sacks, J., Uhlenbeck, K.: The existence of minimal immersions of 2-spheres. Ann. Math. 113, 1–24 (1981)
39. Scheven, C.: Partial regularity for stationary harmonic maps at a free boundary. Math. Z. 253(1), 135–157 (2006)
40. Schikorra, A.: A remark on gauge transformations and the moving frame method. Ann. Inst. H. Poincaré Anal. Non Linéaire 27(2), 503–515 (2010)
41. Sharp, B.: Higher integrability for solutions to a system of critical elliptic PDE. Methods Appl. Anal. 21(2), 221–240 (2014)
42. Sharp, B., Topping, P.: Decay estimates for Rivière's equation, with applications to regularity and compactness. Trans. Am. Math. Soc. 365(5), 2317–2339 (2013)
43. Sharp, B., Zhu, M.: Regularity at the free boundary for Dirac-harmonic maps from surfaces. Calc. Var. Partial Differ. Equ. 55(2), Art. 27 (2016)
44. Struwe, M.: On the evolution of harmonic mappings of Riemannian surfaces. Comment. Math. Helv. 60, 558–581 (1985)
45. Struwe, M.: The existence of surfaces of constant mean curvature with free boundaries. Acta Math. 160, 19–64 (1988)
46. Struwe, M.: The evolution of harmonic mappings with free boundaries. Manuscripta Math. 70, 373–384 (1991)
47. Topping, P.: Repulsion and quantization in almost-harmonic maps, and asymptotics of the harmonic flow. Ann. Math. (2) 159(2), 465–534 (2004)
48. Wang, C.: Bubbling phenomena of certain Palais–Smale sequences from surfaces to general targets. Houston J. Math. 22(3) (1996)
49. Wang, W., Wei, D., Zhang, Z.: Energy identity for approximate harmonic maps from surfaces to general targets. J. Funct. Anal. 272(2), 776–803 (2017)
50. Wehrheim, K.: Uhlenbeck Compactness. EMS Series of Lectures in Mathematics. European Mathematical Society (EMS), Zürich (2004)
51. Wehrheim, K.: Energy quantization and mean value inequalities for nonlinear boundary value problems. J. Eur. Math. Soc. (JEMS) 7(3), 305–318 (2005)
52. Wolfson, J.: Gromov's compactness of pseudo-holomorphic curves and symplectic geometry. J. Differ. Geom. 28(3), 383–405 (1988)
53. Ye, R.: Gromov's compactness theorem for pseudo-holomorphic curves. Trans. Am. Math. Soc. 342(2), 671–694 (1994)
54. Zhu, M.: Regularity for harmonic maps into certain pseudo-Riemannian manifolds. J. Math. Pures Appl. 99(1), 106–123 (2013)

© The Author(s) 2018. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and affiliations: Jürgen Jost (1, 2), Lei Liu (1, 3), Miaomiao Zhu (4). 1. Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany. 2. Department of Mathematics, Leipzig University, Leipzig, Germany. 3. Department of Mathematics, Tsinghua University, Beijing, China. 4. School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2Fs00208-018-1759-8.pdf
Jürgen Jost, Lei Liu, Miaomiao Zhu. The qualitative behavior at the free boundary for approximate harmonic maps from surfaces, Mathematische Annalen, 2018, 1-45, DOI: 10.1007/s00208-018-1759-8 | CommonCrawl |
On the Homology of Certain Smooth Covers of Moduli Spaces of Algebraic Curves
Differential Geometry and its Application. 2015. Vol. 40. P. 86-102.
Dunin-Barkowski P., Popolitov A., Shabat G., Sleptsov A.
We suggest a general method of computation of the homology of certain smooth covers $\widehat{\mathcal{M}}_{g,1}(\mathbb{C})$ of moduli spaces $\mathcal{M}_{g,1}\br{\mathbb{C}}$ of pointed curves of genus $g$. Namely, we consider moduli spaces of algebraic curves with level $m$ structures. The method is based on the lifting of the Strebel-Penner stratification of $\mathcal{M}_{g,1}\br{\mathbb{C}}$. We apply this method for $g\leq 2$ and obtain Betti numbers; these results are consistent with Penner and Harer-Zagier results on Euler characteristics.
Research target: Mathematics
Priority areas: mathematics
Keywords: moduli space of algebraic curves
Publication based on the results of: Теория представлений и математическая физика(2015)
Сабейские этюды
Коротаев А. В. М.: Восточная литература, 1997.
Computation of the first Stiefel-Whitney class for the variety $\overline{{\mathcal M}_{0,n}^{\mathbb R}}$
N. Ya. Amburg, Kreines E. M. arxiv.org. math. Cornell University, 2014. No. 1410.4372.
We compute the class which is Poincare dual to the first Stiefel-Whitney class for the Deligne-Mumford compactification of the moduli space of real algebraic curves of genus 0 with n marked and numbered points in terms of the natural cell decomposition of the variety under consideration.
Dynamics of Information Systems: Mathematical Foundations
Iss. 20. NY: Springer, 2012.
This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems" which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
Moduli spaces of nonspecial pointed curves of arithmetic genus 1
Polishchuk A. Mathematische Annalen. 2017. P. 1-40.
In this paper we study the moduli stack M_{1,n} of curves of arithmetic genus 1 with n marked points, forming a nonspecial divisor. In Polishchuk (A modular compactification of M_{1,n} from A∞-structures, arXiv:1408.0611) this stack was realized as the quotient of an explicit scheme U^{ns}_{1,n} affine of finite type over ℙn−1, by the action of 𝔾m^n . Our main result is an explicit description of the corresponding GIT semistable loci in U^{ns}_{1,n}. This allows us to identify some of the GIT quotients with some of the modular compactifications of M_{1,n} defined in Smyth (Invent Math 192:459–503, 2013; Compos Math 147(3):877–913, 2011).
Birational models of M_2,2 arising as moduli of curves with nonspecial divisors
Polishchuk A., Johnson D. math. arxive. Cornell University, 2018
We study birational projective models of M_2,2 obtained from the moduli space of curves with nonspecial divisors. We describe geometrically which singular curves appear in these models and show that one of them is obtained by blowing down the Weierstrass divisor in the moduli stack of Z-stable curves \bar{M}_2,2(Z) defined by Smyth. As a corollary, we prove projectivity of the coarse moduli space \bar{M}_2,2(Z).
Вещественно-нормированные дифференциалы и гипотеза Арбарелло
Кричевер И. М. Функциональный анализ и его приложения. 2012. Т. 46. № 2. С. 37-51.
Using meromorphic differentials with real periods, we prove Arbarello's conjecture that any compact complex cycle of dimension g−n in the moduli space M_g of smooth algebraic curves of genus g must intersect the locus of curves having a Weierstrass point of order at most n.
A modular compactification of M_{1,n} from A_infty-structures
Polishchuk A., Lekili Y. Journal fuer die reine und angewandte Mathematik. 2017.
We show that a certain moduli space of minimal A∞-structures coincides with the modular compactification ℳ_{1,n(n−1)} of ℳ_{1,n} constructed by Smyth in [26]. In addition, we describe these moduli spaces and the universal curves over them by explicit equations, prove that they are normal and Gorenstein, show that their Picard groups have no torsion and that they have rational singularities if and only if n≤11.
Model for organizing cargo transportation with an initial station of departure and a final station of cargo distribution
Khachatryan N., Akopov A. S. Business Informatics. 2017. No. 1(39). P. 25-35.
A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produce raw material for manufacturing industry located in another region, and there is another node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is followed by the set rule of control. For such a model, one must determine possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of the solution satisfying nonlocal linear restrictions is extremely narrow. It results in the need for the "correct" extension of solutions of a system of differential equations to a class of quasi-solutions having the distinctive feature of gaps in a countable number of points. It was possible numerically using the Runge–Kutta method of the fourth order to build these quasi-solutions and determine their rate of growth. Let us note that in the technical plan the main complexity consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of quasi-solutions and, in particular, sizes of gaps (jumps) of solutions on a number of parameters of the model characterizing a rule of control, technologies for transportation of cargo and intensity of giving of cargo on a node station.
Nullstellensatz over quasi-fields
Trushin D. Russian Mathematical Surveys. 2010. Vol. 65. No. 1. P. 186-187.
Деловой климат в оптовой торговле во II квартале 2014 года и ожидания на III квартал
Лола И. С., Остапкович Г. В. Современная торговля. 2014. № 10.
Прикладные аспекты статистики и эконометрики: труды 8-ой Всероссийской научной конференции молодых ученых, аспирантов и студентов
Вып. 8. МЭСИ, 2011.
Laminations from the Main Cubioid
Timorin V., Blokh A., Oversteegen L. et al. arxiv.org. math. Cornell University, 2013. No. 1305.5788.
According to a recent paper \cite{bopt13}, polynomials from the closure $\ol{\phd}_3$ of the {\em Principal Hyperbolic Domain} ${\rm PHD}_3$ of the cubic connectedness locus have a few specific properties. The family $\cu$ of all polynomials with these properties is called the \emph{Main Cubioid}. In this paper we describe the set $\cu^c$ of laminations which can be associated to polynomials from $\cu$.
Entropy and the Shannon-McMillan-Breiman theorem for beta random matrix ensembles
Bufetov A. I., Mkrtchyan S., Scherbina M. et al. arxiv.org. math. Cornell University, 2013. No. 1301.0342.
Bounded limit cycles of polynomial foliations of ℂP²
Goncharuk N. B., Kudryashov Y. arxiv.org. math. Cornell University, 2015. No. 1504.03313.
In this article we prove in a new way that a generic polynomial vector field in ℂ² possesses countably many homologically independent limit cycles. The new proof needs no estimates on integrals, provides thinner exceptional set for quadratic vector fields, and provides limit cycles that stay in a bounded domain.
Метод параметрикса для диффузий и цепей Маркова
Конаков В. Д. STI. WP BRP. Издательство попечительского совета механико-математического факультета МГУ, 2012. № 2012.
Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action?
Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k- rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem
Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1.
We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one can not achieve the growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained first nontrivial results.
Обоснование адиабатического предела для гиперболических уравнений Гинзбурга-Ландау
Пальвелев Р., Сергеев А. Г. Труды Математического института им. В.А. Стеклова РАН. 2012. Т. 277. С. 199-214.
Hypercommutative operad as a homotopy quotient of BV
Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749.
We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of Batalin-Vilkovisky operad by the BV-operator). In other words we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.
Cross-sections, quotients, and representation rings of semisimple algebraic groups
V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G, and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map T ⇢ G/T, where T is a maximal torus of G and W the Weyl group.
Mathematical Modeling of Social Processes
Edited by A. Mikhailov. Issue 14. Moscow: Faculty of Sociology, Moscow State University (MGU), 2012.
Molybdenum, 42Mo
Pronunciation: /məˈlɪbdənəm/ (mə-LIB-də-nəm)
Standard atomic weight Ar°(Mo): 95.95±0.01 (abridged: 95.95±0.01)[1]
Periodic table position: group 6, period 5, d-block; niobium ← molybdenum → technetium
Atomic number (Z): 42
Electron configuration: [Kr] 4d5 5s1 (shells: 2, 8, 18, 13, 1)
Phase at STP: solid
Melting point: 2896 K (2623 °C, 4753 °F)
Density (near r.t.): 10.28 g/cm3
Heat of fusion: 37.48 kJ/mol
Heat of vaporization: 598 kJ/mol
Molar heat capacity: 24.06 J/(mol·K)
Oxidation states: −4, −2, −1, 0, +1,[2] +2, +3, +4, +5, +6 (a strongly acidic oxide)
Electronegativity: 2.16 (Pauling scale)
Ionization energies: 1st: 684.3 kJ/mol; 2nd: 1560 kJ/mol; 3rd: 2618 kJ/mol
Atomic radius: 139 pm (empirical)
Covalent radius: 154±5 pm
Crystal structure: body-centered cubic (bcc)
Speed of sound (thin rod): 5400 m/s (at r.t.)
Thermal expansion: 4.8 µm/(m⋅K) (at 25 °C)
Thermal conductivity: 138 W/(m⋅K)
Thermal diffusivity: 54.3 mm2/s (at 300 K)[3]
Electrical resistivity: 53.4 nΩ⋅m (at 20 °C)
Magnetic ordering: paramagnetic[4]
Molar magnetic susceptibility: +89.0×10^−6 cm3/mol (at 298 K)[5]
Young's modulus: 329 GPa
Vickers hardness: 1400–2740 MPa
Discovery: Carl Wilhelm Scheele (1778)
First isolation: Peter Jacob Hjelm (1781)

Main isotopes of molybdenum (abundance; half-life t1/2; decay mode; product):
92Mo: 14.65%; stable
93Mo: synthetic; 4×10^3 y; ε; 93Nb
94Mo: 9.19%; stable
99Mo: synthetic; 65.94 h; β− (with γ); 99mTc
100Mo: 9.74%; 7.1×10^18 y; β−β−; 100Ru
Molybdenum is a chemical element with the symbol Mo and atomic number 42 which is located in period 5 and group 6. The name is from Neo-Latin molybdaenum, which is based on Ancient Greek Μόλυβδος molybdos, meaning lead, since its ores were confused with lead ores.[6] Molybdenum minerals have been known throughout history, but the element was discovered (in the sense of differentiating it as a new entity from the mineral salts of other metals) in 1778 by Carl Wilhelm Scheele. The metal was first isolated in 1781 by Peter Jacob Hjelm.[7]
Molybdenum does not occur naturally as a free metal on Earth; it is found only in various oxidation states in minerals. The free element, a silvery metal with a grey cast, has the sixth-highest melting point of any element. It readily forms hard, stable carbides in alloys, and for this reason most of the world production of the element (about 80%) is used in steel alloys, including high-strength alloys and superalloys.
Most molybdenum compounds have low solubility in water, but when molybdenum-bearing minerals contact oxygen and water, the resulting molybdate ion MoO42− is quite soluble. Industrially, molybdenum compounds (about 14% of world production of the element) are used in high-pressure and high-temperature applications as pigments and catalysts.
Molybdenum-bearing enzymes are by far the most common bacterial catalysts for breaking the chemical bond in atmospheric molecular nitrogen in the process of biological nitrogen fixation. At least 50 molybdenum enzymes are now known in bacteria, plants, and animals, although only bacterial and cyanobacterial enzymes are involved in nitrogen fixation. These nitrogenases contain an iron-molybdenum cofactor FeMoco, which is believed to contain either Mo(III) or Mo(IV).[8][9] This is distinct from the fully oxidized Mo(VI) found complexed with molybdopterin in all other molybdenum-bearing enzymes, which perform a variety of crucial functions.[10] The variety of crucial reactions catalyzed by these latter enzymes means that molybdenum is an essential element for all higher eukaryote organisms, including humans.
In its pure form, molybdenum is a silvery-grey metal with a Mohs hardness of 5.5 and a standard atomic weight of 95.95 g/mol.[11][12] It has a melting point of 2,623 °C (4,753 °F); of the naturally occurring elements, only tantalum, osmium, rhenium, tungsten, and carbon have higher melting points.[6] It has one of the lowest coefficients of thermal expansion among commercially used metals.[13]
Molybdenum is a transition metal with an electronegativity of 2.16 on the Pauling scale. It does not visibly react with oxygen or water at room temperature. Weak oxidation of molybdenum starts at 300 °C (572 °F); bulk oxidation occurs at temperatures above 600 °C, resulting in molybdenum trioxide. Like many heavier transition metals, molybdenum shows little inclination to form a cation in aqueous solution, although the Mo3+ cation is known under carefully controlled conditions.[14]
Gaseous molybdenum consists of the diatomic species Mo2. The molecule is a singlet: in addition to five conventional bonds, a further pair of electrons occupies a bonding orbital, and the result is a sextuple bond.[15][16]
Main article: Isotopes of molybdenum
There are 35 known isotopes of molybdenum, ranging in atomic mass from 83 to 117, as well as four metastable nuclear isomers. Seven isotopes occur naturally, with atomic masses of 92, 94, 95, 96, 97, 98, and 100. Of these naturally occurring isotopes, only molybdenum-100 is unstable.[17]
Molybdenum-98 is the most abundant isotope, comprising 24.14% of all molybdenum. Molybdenum-100 has a half-life of about 10^19 y and undergoes double beta decay into ruthenium-100. All unstable isotopes of molybdenum decay into isotopes of niobium, technetium, and ruthenium. Of the synthetic radioisotopes, the most stable is 93Mo, with a half-life of 4,000 years.[18]
The most common isotopic molybdenum application involves molybdenum-99, which is a fission product. It is a parent radioisotope to the short-lived gamma-emitting daughter radioisotope technetium-99m, a nuclear isomer used in various imaging applications in medicine.[19] In 2008, the Delft University of Technology applied for a patent on the molybdenum-98-based production of molybdenum-99.[20]
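The parent-daughter kinetics behind a 99Mo/99mTc generator can be sketched with the two-member Bateman equation. A minimal illustration, using the 65.94 h half-life of 99Mo given above; the 6.01 h half-life of 99mTc is an assumed literature value not stated in this text, and the sketch ignores the fraction of 99Mo decays that bypass the metastable state:

```python
import math

# Half-lives in hours: Mo-99 from the article; Tc-99m = 6.01 h is an
# assumed literature value, not taken from this text.
T_MO99, T_TC99M = 65.94, 6.01
lam_p = math.log(2) / T_MO99   # parent (Mo-99) decay constant
lam_d = math.log(2) / T_TC99M  # daughter (Tc-99m) decay constant

def tc99m_activity(a0_parent, t_hours):
    """Daughter (Tc-99m) activity after t_hours, for initial Mo-99
    activity a0_parent, via the two-member Bateman equation.
    Simplification: every Mo-99 decay is assumed to feed Tc-99m."""
    return (a0_parent * lam_d / (lam_d - lam_p)
            * (math.exp(-lam_p * t_hours) - math.exp(-lam_d * t_hours)))

# Time at which the daughter activity peaks; generators are eluted
# ("milked") roughly once a day, consistent with this ~23 h figure.
t_peak = math.log(lam_d / lam_p) / (lam_d - lam_p)
print(round(t_peak, 1))  # ≈ 22.9
```

At the peak the daughter activity transiently equals the remaining parent activity (transient equilibrium), which is why elution schedules track the parent's decay.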
See also: Category:Molybdenum compounds
Molybdenum forms chemical compounds in oxidation states −IV and from −II to +VI. Higher oxidation states are more relevant to its terrestrial occurrence and its biological roles, mid-level oxidation states are often associated with metal clusters, and very low oxidation states are typically associated with organomolybdenum compounds. Mo and W chemistry shows strong similarities. The relative rarity of molybdenum(III), for example, contrasts with the pervasiveness of the chromium(III) compounds. The highest oxidation state is seen in molybdenum(VI) oxide (MoO3), whereas the normal sulfur compound is molybdenum disulfide MoS2.[21]
Oxidation states and example compounds:[22][23]
−4: Na4[Mo(CO)4]
−2: Na2[Mo2(CO)10]
0: Mo(CO)6
+1: Na[C6H6Mo]
+2: MoCl2
+3: MoBr3
+4: MoS2
+6: MoF6
Keggin structure of the phosphomolybdate anion (P[Mo12O40]3−), an example of a polyoxometalate
From the perspective of commerce, the most important compounds are molybdenum disulfide (MoS2) and molybdenum trioxide (MoO3). The black disulfide is the main mineral. It is roasted in air to give the trioxide:[21]

2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2
The trioxide, which is volatile at high temperatures, is the precursor to virtually all other Mo compounds as well as alloys. Molybdenum has several oxidation states, the most stable being +4 and +6 (see the table of oxidation states above).
Molybdenum(VI) oxide is soluble in strong alkaline water, forming molybdates (MoO42−). Molybdates are weaker oxidants than chromates. They tend to form structurally complex oxyanions by condensation at lower pH values, such as [Mo7O24]6− and [Mo8O26]4−. Polymolybdates can incorporate other ions, forming polyoxometalates.[24] The dark-blue phosphorus-containing heteropolymolybdate P[Mo12O40]3− is used for the spectroscopic detection of phosphorus.[25] The broad range of oxidation states of molybdenum is reflected in various molybdenum chlorides:[21]
Molybdenum(II) chloride MoCl2, which exists as the hexamer Mo6Cl12 and the related dianion [Mo6Cl14]2−.
Molybdenum(III) chloride MoCl3, a dark red solid, which converts to the trianionic complex [MoCl6]3−.
Molybdenum(IV) chloride MoCl4, a black solid, which adopts a polymeric structure.
Molybdenum(V) chloride MoCl5, a dark green solid, which adopts a dimeric structure.
Molybdenum(VI) chloride MoCl6, a black solid, which is monomeric and slowly decomposes to MoCl5 and Cl2 at room temperature.[26]
Like chromium and some other transition metals, molybdenum forms quadruple bonds, such as in Mo2(CH3COO)4 and [Mo2Cl8]4−.[21][27] The Lewis acid properties of the butyrate and perfluorobutyrate dimers, Mo2(O2CR)4 and Rh2(O2CR)4, have been reported.[28]
The oxidation state 0 and lower are possible with carbon monoxide as ligand, such as in molybdenum hexacarbonyl, Mo(CO)6.[21][29]
Molybdenite—the principal ore from which molybdenum is now extracted—was previously known as molybdena. Molybdena was confused with and often utilized as though it were graphite. Like graphite, molybdenite can be used to blacken a surface or as a solid lubricant.[30] Even when molybdena was distinguishable from graphite, it was still confused with the common lead ore PbS (now called galena); the name comes from Ancient Greek Μόλυβδος molybdos, meaning lead.[13] (The Greek word itself has been proposed as a loanword from Anatolian Luvian and Lydian languages).[31]
Although (reportedly) molybdenum was deliberately alloyed with steel in one 14th-century Japanese sword (mfd. ca. 1330), that art was never employed widely and was later lost.[32][33] In the West in 1754, Bengt Andersson Qvist examined a sample of molybdenite and determined that it did not contain lead and thus was not galena.[34]
By 1778 Swedish chemist Carl Wilhelm Scheele stated firmly that molybdena was (indeed) neither galena nor graphite.[35][36] Instead, Scheele correctly proposed that molybdena was an ore of a distinct new element, named molybdenum for the mineral in which it resided, and from which it might be isolated. Peter Jacob Hjelm successfully isolated molybdenum using carbon and linseed oil in 1781.[13][37]
For the next century, molybdenum had no industrial use. It was relatively scarce, the pure metal was difficult to extract, and the necessary techniques of metallurgy were immature.[38][39][40] Early molybdenum steel alloys showed great promise of increased hardness, but efforts to manufacture the alloys on a large scale were hampered by inconsistent results, a tendency toward brittleness, and recrystallization. In 1906, William D. Coolidge filed a patent for rendering molybdenum ductile, leading to applications as a heating element for high-temperature furnaces and as a support for tungsten-filament light bulbs; oxide formation and degradation require that molybdenum be physically sealed or held in an inert gas.[41] In 1913, Frank E. Elmore developed a froth flotation process to recover molybdenite from ores; flotation remains the primary isolation process.[42]
During World War I, demand for molybdenum spiked; it was used both in armor plating and as a substitute for tungsten in high-speed steels. Some British tanks were protected by 75 mm (3 in) manganese steel plating, but this proved to be ineffective. The manganese steel plates were replaced with much lighter 25 mm (1.0 in) molybdenum steel plates, allowing for higher speed, greater maneuverability, and better protection.[13] The Germans also used molybdenum-doped steel for heavy artillery, as in the super-heavy howitzer Big Bertha,[43] because traditional steel melted at the temperatures produced by the propellant of the one-ton shell.[44] After the war, demand plummeted until metallurgical advances allowed extensive development of peacetime applications. In World War II, molybdenum again saw strategic importance as a substitute for tungsten in steel alloys.[45]
Occurrence and production
Molybdenite on quartz
Molybdenum is the 54th most abundant element in the Earth's crust with an average of 1.5 parts per million and the 25th most abundant element in its oceans, with an average of 10 parts per billion; it is the 42nd most abundant element in the Universe.[13][46] The Russian Luna 24 mission discovered a molybdenum-bearing grain (1 × 0.6 µm) in a pyroxene fragment taken from Mare Crisium on the Moon.[47] The comparative rarity of molybdenum in the Earth's crust is offset by its concentration in a number of water-insoluble ores, often combined with sulfur in the same way as copper, with which it is often found. Though molybdenum is found in such minerals as wulfenite (PbMoO4) and powellite (CaMoO4), the main commercial source is molybdenite (MoS2). Molybdenum is mined as a principal ore and is also recovered as a byproduct of copper and tungsten mining.[6]
The world's production of molybdenum was 250,000 tonnes in 2011, the largest producers being China (94,000 t), the United States (64,000 t), Chile (38,000 t), Peru (18,000 t) and Mexico (12,000 t). The total reserves are estimated at 10 million tonnes, and are mostly concentrated in China (4.3 Mt), the US (2.7 Mt) and Chile (1.2 Mt). By continent, 93% of world molybdenum production is about evenly shared between North America, South America (mainly in Chile), and China. Europe and the rest of Asia (mostly Armenia, Russia, Iran and Mongolia) produce the remainder.[48]
World production trend
In molybdenite processing, the ore is first roasted in air at a temperature of 700 °C (1,292 °F). The process gives gaseous sulfur dioxide and the molybdenum(VI) oxide:[21]
2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2
The resulting oxide is then usually extracted with aqueous ammonia to give ammonium molybdate:
MoO3 + 2 NH3 + H2O → (NH4)2(MoO4)
Copper, an impurity in molybdenite, is separated at this stage by treatment with hydrogen sulfide.[21] Ammonium molybdate converts to ammonium dimolybdate, which is isolated as a solid. Heating this solid gives molybdenum trioxide:[49]
(NH4)2Mo2O7 → 2 MoO3 + 2 NH3 + H2O
Crude trioxide can be further purified by sublimation at 1,100 °C (2,010 °F).
Metallic molybdenum is produced by reduction of the oxide with hydrogen:
MoO3 + 3 H2 → Mo + 3 H2O
The molybdenum for steel production is reduced by the aluminothermic reaction with addition of iron to produce ferromolybdenum. A common form of ferromolybdenum contains 60% molybdenum.[21][50]
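A back-of-the-envelope mass balance for the roast-and-reduce route above (MoS2 → MoO3 → Mo) can make the stoichiometry concrete. This is a sketch under the idealized assumptions of a pure molybdenite feed and 100% conversion at every step; the function names are illustrative, not from the text:

```python
# Atomic masses in g/mol (standard values; Mo matches the 95.95 quoted earlier).
M = {"Mo": 95.95, "S": 32.06, "O": 16.00, "H": 1.008}

M_MOS2 = M["Mo"] + 2 * M["S"]   # molybdenite, MoS2
M_MOO3 = M["Mo"] + 3 * M["O"]   # trioxide after roasting
M_H2 = 2 * M["H"]

def mo_from_mos2(kg_mos2):
    """Ideal mass of Mo metal (kg) obtainable from kg_mos2 of pure MoS2."""
    return kg_mos2 * M["Mo"] / M_MOS2

def h2_for_reduction(kg_mo):
    """H2 (kg) consumed reducing MoO3 to kg_mo of metal (3 H2 per Mo)."""
    return kg_mo / M["Mo"] * 3 * M_H2

print(round(mo_from_mos2(1.0), 3))      # ≈ 0.599 kg Mo per kg MoS2
print(round(h2_for_reduction(1.0), 3))  # ≈ 0.063 kg H2 per kg Mo
```

In other words, molybdenite is almost 60% molybdenum by mass in the ideal case; real recoveries are lower because ore grades and process yields are not perfect.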
Molybdenum had a value of approximately $30,000 per tonne as of August 2009. It maintained a price at or near $10,000 per tonne from 1997 through 2003, and reached a peak of $103,000 per tonne in June 2005.[51] In 2008, the London Metal Exchange announced that molybdenum would be traded as a commodity.[52]
The Knaben mine in southern Norway, opened in 1885, was the first dedicated molybdenum mine. Closed in 1973 but reopened in 2007,[53] it now produces 100,000 kilograms (98 long tons; 110 short tons) of molybdenum disulfide per year. Large mines in Colorado (such as the Henderson mine and the Climax mine)[54] and in British Columbia yield molybdenite as their primary product, while many porphyry copper deposits such as the Bingham Canyon Mine in Utah and the Chuquicamata mine in northern Chile produce molybdenum as a byproduct of copper-mining.
A plate of molybdenum copper alloy
About 86% of molybdenum produced is used in metallurgy, with the rest used in chemical applications. The estimated global use is structural steel 35%, stainless steel 25%, chemicals 14%, tool & high-speed steels 9%, cast iron 6%, molybdenum elemental metal 6%, and superalloys 5%.[55]
Molybdenum can withstand extreme temperatures without significantly expanding or softening, making it useful in environments of intense heat, including military armor, aircraft parts, electrical contacts, industrial motors, and supports for filaments in light bulbs.[13][56]
Most high-strength steel alloys (for example, 41xx steels) contain 0.25% to 8% molybdenum.[6] Even in these small portions, more than 43,000 tonnes of molybdenum are used each year in stainless steels, tool steels, cast irons, and high-temperature superalloys.[46]
Molybdenum is also valued in steel alloys for its high corrosion resistance and weldability.[46][48] Molybdenum contributes corrosion resistance to type-300 stainless steels (specifically type-316) and especially so in the so-called superaustenitic stainless steels (such as alloy AL-6XN, 254SMO and 1925hMo). Molybdenum increases lattice strain, thus increasing the energy required to dissolve iron atoms from the surface. Molybdenum is also used to enhance the corrosion resistance of ferritic (for example grade 444) and martensitic (for example 1.4122 and 1.4418) stainless steels.
Because of its lower density and more stable price, molybdenum is sometimes used in place of tungsten.[46] An example is the 'M' series of high-speed steels such as M2, M4 and M42, substituted for the 'T' steel series, which contains tungsten. Molybdenum can also be used as a flame-resistant coating for other metals. Although its melting point is 2,623 °C (4,753 °F), molybdenum rapidly oxidizes at temperatures above 760 °C (1,400 °F), making it better suited for use in vacuum environments.[56]
TZM (Mo (~99%), Ti (~0.5%), Zr (~0.08%) and some C) is a corrosion-resistant molybdenum superalloy that resists molten fluoride salts at temperatures above 1,300 °C (2,370 °F). It has about twice the strength of pure Mo and is more ductile and more weldable; in tests against a standard eutectic salt (FLiBe) and salt vapors used in molten salt reactors, it showed so little corrosion after 1,100 hours that it was difficult to measure.[57][58]
Other molybdenum-based alloys that do not contain iron have only limited applications. For example, because of its resistance to molten zinc, both pure molybdenum and molybdenum-tungsten alloys (70%/30%) are used for piping, stirrers and pump impellers that come into contact with molten zinc.[59]
Other applications as a pure element
Molybdenum powder is used as a fertilizer for some plants, such as cauliflower.[46]
Elemental molybdenum is used in NO, NO2, NOx analyzers in power plants for pollution controls. At 350 °C (662 °F), the element acts as a catalyst for NO2/NOx to form NO molecules for detection by infrared light.[60]
Molybdenum anodes replace tungsten in certain low voltage X-ray sources for specialized uses such as mammography.[61]
The radioactive isotope molybdenum-99 is used to generate technetium-99m, which is used for medical imaging.[62] The isotope is handled and stored as the molybdate.[63]
Molybdenum disulfide (MoS2) is used as a solid lubricant and a high-pressure high-temperature (HPHT) anti-wear agent. It forms strong films on metallic surfaces and is a common additive to HPHT greases — in the event of a catastrophic grease failure, a thin layer of molybdenum prevents contact of the lubricated parts.[64]
When combined with small amounts of cobalt, MoS2 is also used as a catalyst in the hydrodesulfurization (HDS) of petroleum. In the presence of hydrogen, this catalyst facilitates the removal of nitrogen and especially sulfur from the feedstock, which otherwise would poison downstream catalysts. HDS is one of the largest scale applications of catalysis in industry.[65]
Molybdenum oxides are important catalysts for selective oxidation of organic compounds. The production of the commodity chemicals acrylonitrile and formaldehyde relies on MoOx-based catalysts.[49]
Molybdenum disilicide (MoSi2) is an electrically conducting ceramic with primary use in heating elements operating at temperatures above 1500 °C in air.[66]
Molybdenum trioxide (MoO3) is used as an adhesive between enamels and metals.[35]
Lead molybdate (wulfenite) co-precipitated with lead chromate and lead sulfate is a bright-orange pigment used with ceramics and plastics.[67]
The molybdenum-based mixed oxides are versatile catalysts in the chemical industry. Examples include the oxidation of carbon monoxide, the oxidation of propylene to acrolein and acrylic acid, and the ammoxidation of propylene to acrylonitrile.[68][69]
Molybdenum carbides, nitride and phosphides can be used for hydrotreatment of rapeseed oil.[70]
Ammonium heptamolybdate is used in biological staining.
Molybdenum-coated soda-lime glass is used as the back-contact layer in CIGS (copper indium gallium selenide) solar cells.
Phosphomolybdic acid is a stain used in thin-layer chromatography.
Main article: Molybdenum in biology
Mo-containing enzymes
Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals).[71]
At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria.[72][73] Those enzymes include aldehyde oxidase, sulfite oxidase and xanthine oxidase.[13] With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. The only known exception is nitrogenase, which uses the FeMoco cofactor, which has the formula Fe7MoS9C.[74]
In terms of function, molybdoenzymes catalyze the oxidation and sometimes reduction of certain small molecules in the process of regulating nitrogen, sulfur, and carbon.[75] In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. An extremely high concentration of molybdenum reverses the trend and can inhibit purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth.[76]
Mo is a component in most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin.[77][78] Nitrogenases catalyze the production of ammonia from atmospheric nitrogen:
N2 + 8 H+ + 8 e− + 16 ATP + 16 H2O → 2 NH3 + H2 + 16 ADP + 16 Pi
The biosynthesis of the FeMoco active site is highly complex.[79]
Structure of the FeMoco active site of nitrogenase.
The molybdenum cofactor is composed of a molybdenum-free organic complex called molybdopterin, which binds an oxidized molybdenum(VI) atom through adjacent sulfur (or occasionally selenium) atoms. Except for the ancient nitrogenases, all known Mo-using enzymes use this cofactor.
Molybdate is transported in the body as MoO42−.[76]
Human metabolism and deficiency
Molybdenum is an essential trace dietary element.[80] Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxidase, and mitochondrial amidoxime reductase.[81] People severely deficient in molybdenum have poorly functioning sulfite oxidase and are prone to toxic reactions to sulfites in foods.[82][83] The human body contains about 0.07 mg of molybdenum per kilogram of body weight,[84] with higher concentrations in the liver and kidneys and lower in the vertebrae.[46] Molybdenum is also present within human tooth enamel and may help prevent its decay.[85]
Acute toxicity has not been seen in humans, and the toxicity depends strongly on the chemical state. Studies on rats show a median lethal dose (LD50) as low as 180 mg/kg for some Mo compounds.[86] Although human toxicity data is unavailable, animal studies have shown that chronic ingestion of more than 10 mg/day of molybdenum can cause diarrhea, growth retardation, infertility, low birth weight, and gout; it can also affect the lungs, kidneys, and liver.[87][88] Sodium tungstate is a competitive inhibitor of molybdenum. Dietary tungsten reduces the concentration of molybdenum in tissues.[46]
Low soil concentration of molybdenum in a geographical band from northern China to Iran results in a general dietary molybdenum deficiency and is associated with increased rates of esophageal cancer.[89][90][91] Compared to the United States, which has a greater supply of molybdenum in the soil, people living in those areas have about 16 times greater risk for esophageal squamous cell carcinoma.[92]
Molybdenum deficiency has also been reported as a consequence of long-term total parenteral nutrition (complete intravenous feeding) not supplemented with molybdenum. It results in high blood levels of sulfite and urate, in much the same way as molybdenum cofactor deficiency. Since pure molybdenum deficiency from this cause occurs primarily in adults, the neurological consequences are not as marked as in cases of congenital cofactor deficiency.[93]
A congenital molybdenum cofactor deficiency disease, seen in infants, is an inability to synthesize molybdenum cofactor, the heterocyclic molecule discussed above that binds molybdenum at the active site in all known human enzymes that use molybdenum. The resulting deficiency results in high levels of sulfite and urate, and neurological damage.[94][95]
Most molybdenum is excreted from the human body as molybdate in the urine. Furthermore, urinary excretion of molybdenum increases as dietary molybdenum intake increases. Small amounts of molybdenum are excreted from the body in the feces by way of the bile; small amounts also can be lost in sweat and in hair.[96][97]
Excess and copper antagonism
High levels of molybdenum can interfere with the body's uptake of copper, producing copper deficiency. Molybdenum prevents plasma proteins from binding to copper, and it also increases the amount of copper that is excreted in urine. Ruminants that consume high levels of molybdenum suffer from diarrhea, stunted growth, anemia, and achromotrichia (loss of fur pigment). These symptoms can be alleviated by copper supplements, either dietary or by injection.[98] The effective copper deficiency can be aggravated by excess sulfur.[46][99]
Copper reduction or deficiency can also be deliberately induced for therapeutic purposes by the compound ammonium tetrathiomolybdate, in which the bright red anion tetrathiomolybdate is the copper-chelating agent. Tetrathiomolybdate was first used therapeutically in the treatment of copper toxicosis in animals. It was then introduced as a treatment in Wilson's disease, a hereditary copper metabolism disorder in humans; it acts both by competing with copper absorption in the bowel and by increasing excretion. It has also been found to have an inhibitory effect on angiogenesis, potentially by inhibiting the membrane translocation process that is dependent on copper ions.[100] This is a promising avenue for investigation of treatments for cancer, age-related macular degeneration, and other diseases that involve a pathologic proliferation of blood vessels.[101][102]
In some grazing livestock, most strongly in cattle, molybdenum excess in the soil of pasturage can produce scours (diarrhea) if the pH of the soil is neutral to alkaline; see teartness.
Dietary recommendations
In 2000, the then U.S. Institute of Medicine (now the National Academy of Medicine, NAM) updated its Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for molybdenum. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead.
An AI of 2 micrograms (μg) of molybdenum per day was established for infants up to 6 months of age, and 3 μg/day from 7 to 12 months of age, both for males and females. For older children and adults, the following daily RDAs have been established for molybdenum: 17 μg from 1 to 3 years of age, 22 μg from 4 to 8 years, 34 μg from 9 to 13 years, 43 μg from 14 to 18 years, and 45 μg for persons 19 years old and older. All these RDAs are valid for both sexes. Pregnant or lactating females from 14 to 50 years of age have a higher daily RDA of 50 μg of molybdenum.
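The age schedule above is easy to misread in prose; encoded as a small lookup it is unambiguous. A sketch with values in μg/day taken directly from the paragraph (the function name and bracket encoding are illustrative):

```python
# U.S. AI/RDA values for molybdenum in micrograms per day, per the
# schedule above. Entries are (upper age bound in years, value);
# bracket boundaries approximate the prose ("7 to 12 months", etc.).
MO_DRI_UG = [
    (0.5, 2),            # 0-6 months (AI)
    (1, 3),              # 7-12 months (AI)
    (3, 17),             # 1-3 years
    (8, 22),             # 4-8 years
    (13, 34),            # 9-13 years
    (18, 43),            # 14-18 years
    (float("inf"), 45),  # 19 years and older
]

def molybdenum_rda(age_years, pregnant_or_lactating=False):
    """Daily molybdenum reference intake in micrograms (both sexes)."""
    if pregnant_or_lactating and 14 <= age_years <= 50:
        return 50  # higher RDA for pregnancy/lactation, ages 14-50
    for upper, value in MO_DRI_UG:
        if age_years <= upper:
            return value

print(molybdenum_rda(30))        # 45
print(molybdenum_rda(30, True))  # 50
```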
As for safety, the NAM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of molybdenum, the UL is 2000 μg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs).[103]
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For women and men ages 15 and older the AI is set at 65 μg/day. Pregnant and lactating women have the same AI. For children aged 1–14 years, the AIs increase with age from 15 to 45 μg/day. The adult AIs are higher than the U.S. RDAs,[104] but EFSA, after reviewing the same safety question, set its UL at 600 μg/day, much lower than the U.S. value.[105]
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For molybdenum labeling purposes 100% of the Daily Value was 75 μg, but as of May 27, 2016 it was revised to 45 μg.[106][107] A table of the old and new adult daily values is provided at Reference Daily Intake.
Food sources
Average daily intake varies between 120 and 240 μg/day, which is higher than dietary recommendations.[87] Pork, lamb, and beef liver each have approximately 1.5 parts per million of molybdenum. Other significant dietary sources include green beans, eggs, sunflower seeds, wheat flour, lentils, cucumbers, and cereal grain.[13]
Molybdenum dusts and fumes, generated by mining or metalworking, can be toxic, especially if ingested (including dust trapped in the sinuses and later swallowed).[86] Prolonged low-level exposure can cause irritation to the eyes and skin. Direct inhalation or ingestion of molybdenum and its oxides should be avoided.[108][109] OSHA regulations specify the maximum permissible molybdenum exposure in an 8-hour day as 5 mg/m³. Chronic exposure to 60 to 600 mg/m³ can cause symptoms including fatigue, headaches and joint pains.[110] At levels of 5000 mg/m³, molybdenum is immediately dangerous to life and health.[111]
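The exposure limits above can be encoded as a simple threshold check; the sketch below is our own illustration (the bucket labels are descriptive, not regulatory terms):

```python
def classify_mo_exposure(conc_mg_m3):
    """Roughly bucket an airborne molybdenum concentration (mg/m3) against
    the limits quoted above: the OSHA 8-hour permissible limit of 5 mg/m3,
    the 60-600 mg/m3 chronic-symptom range, and the 5000 mg/m3 IDLH level."""
    if conc_mg_m3 >= 5000:
        return "IDLH: immediately dangerous to life and health"
    if conc_mg_m3 >= 60:
        return "chronic-symptom range (fatigue, headaches, joint pains)"
    if conc_mg_m3 > 5:
        return "above OSHA 8-hour permissible limit"
    return "at or below OSHA 8-hour permissible limit"
```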
List of molybdenum mines
Molybdenum mining in the United States
^ "Standard Atomic Weights: Molybdenum". CIAAW. 2013.
^ "Molybdenum: molybdenum(I) fluoride compound data". OpenMOPAC.net. Retrieved 2007-12-10.
^ Lindemann, A.; Blumm, J. (2009). Measurement of the Thermophysical Properties of Pure Molybdenum. Vol. 3. 17th Plansee Seminar.
^ Lide, D. R., ed. (2005). "Magnetic susceptibility of the elements and inorganic compounds". CRC Handbook of Chemistry and Physics (PDF) (86th ed.). Boca Raton (FL): CRC Press. ISBN 0-8493-0486-5.
^ Weast, Robert (1984). CRC, Handbook of Chemistry and Physics. Boca Raton, Florida: Chemical Rubber Company Publishing. pp. E110. ISBN 0-8493-0464-4.
^ a b c d Lide, David R., ed. (1994). "Molybdenum". CRC Handbook of Chemistry and Physics. Vol. 4. Chemical Rubber Publishing Company. p. 18. ISBN 978-0-8493-0474-3.
^ "It's Elemental - The Element Molybdenum". education.jlab.org. Archived from the original on 2018-07-04. Retrieved 2018-07-03.
^ Bjornsson, Ragnar; Neese, Frank; Schrock, Richard R.; Einsle, Oliver; DeBeer, Serena (2015). "The discovery of Mo(III) in FeMoco: reuniting enzyme and model chemistry". Journal of Biological Inorganic Chemistry. 20 (2): 447–460. doi:10.1007/s00775-014-1230-6. ISSN 0949-8257. PMC 4334110. PMID 25549604.
^ Van Stappen, Casey; Davydov, Roman; Yang, Zhi-Yong; Fan, Ruixi; Guo, Yisong; Bill, Eckhard; Seefeldt, Lance C.; Hoffman, Brian M.; DeBeer, Serena (2019-09-16). "Spectroscopic Description of the E1 State of Mo Nitrogenase Based on Mo and Fe X-ray Absorption and Mössbauer Studies". Inorganic Chemistry. 58 (18): 12365–12376. doi:10.1021/acs.inorgchem.9b01951. ISSN 0020-1669. PMC 6751781. PMID 31441651.
^ Leimkühler, Silke (2020). "The biosynthesis of the molybdenum cofactors in Escherichia coli". Environmental Microbiology. 22 (6): 2007–2026. doi:10.1111/1462-2920.15003. ISSN 1462-2920. PMID 32239579.
^ Wieser, M. E.; Berglund, M. (2009). "Atomic weights of the elements 2007 (IUPAC Technical Report)" (PDF). Pure and Applied Chemistry. 81 (11): 2131–2156. doi:10.1351/PAC-REP-09-08-03. S2CID 98084907. Archived from the original (PDF) on 2012-03-11. Retrieved 2012-02-13.
^ Meija, Juris; et al. (2013). "Current Table of Standard Atomic Weights in Alphabetical Order: Standard Atomic weights of the elements". Commission on Isotopic Abundances and Atomic Weights. Archived from the original on 2014-04-29.
^ a b c d e f g h Emsley, John (2001). Nature's Building Blocks. Oxford: Oxford University Press. pp. 262–266. ISBN 978-0-19-850341-5.
^ Parish, R. V. (1977). The Metallic Elements. New York: Longman. pp. 112, 133. ISBN 978-0-582-44278-8.
^ Merino, Gabriel; Donald, Kelling J.; D'Acchioli, Jason S.; Hoffmann, Roald (2007). "The Many Ways To Have a Quintuple Bond". J. Am. Chem. Soc. 129 (49): 15295–15302. doi:10.1021/ja075454b. PMID 18004851.
^ Roos, Björn O.; Borin, Antonio C.; Laura Gagliardi (2007). "Reaching the Maximum Multiplicity of the Covalent Chemical Bond". Angew. Chem. Int. Ed. 46 (9): 1469–72. doi:10.1002/anie.200603600. PMID 17225237.
^ Audi, Georges; Bersillon, Olivier; Blachot, Jean; Wapstra, Aaldert Hendrik (2003), "The NUBASE evaluation of nuclear and decay properties", Nuclear Physics A, 729: 3–128, Bibcode:2003NuPhA.729....3A, doi:10.1016/j.nuclphysa.2003.11.001
^ Lide, David R., ed. (2006). CRC Handbook of Chemistry and Physics. Vol. 11. CRC. pp. 87–88. ISBN 978-0-8493-0487-3.
^ Armstrong, John T. (2003). "Technetium". Chemical & Engineering News. Archived from the original on 2008-10-06. Retrieved 2009-07-07.
^ Wolterbeek, Hubert Theodoor; Bode, Peter. "A process for the production of no-carrier added 99Mo". European Patent EP2301041 (A1), 2011-03-30. Retrieved 2012-06-27.
^ a b c d e f g h Holleman, Arnold F.; Wiberg, Egon; Wiberg, Nils (1985). Lehrbuch der Anorganischen Chemie (91–100 ed.). Walter de Gruyter. pp. 1096–1104. ISBN 978-3-11-007511-3.
^ Schmidt, Max (1968). "VI. Nebengruppe". Anorganische Chemie II (in German). Wissenschaftsverlag. pp. 119–127.
^ Werner, Helmut (2008-12-16). Landmarks in Organo-Transition Metal Chemistry: A Personal View. Springer Science & Business Media. ISBN 978-0-387-09848-7.
^ Pope, Michael T.; Müller, Achim (1991). "Polyoxometalate Chemistry: An Old Field with New Dimensions in Several Disciplines". Angewandte Chemie International Edition. 30: 34–48. doi:10.1002/anie.199100341.
^ Nollet, Leo M. L., ed. (2000). Handbook of water analysis. New York, NY: Marcel Dekker. pp. 280–288. ISBN 978-0-8247-8433-1.
^ Tamadon, Farhad; Seppelt, Konrad (2013-01-07). "The Elusive Halides VCl5, MoCl6, and ReCl6". Angewandte Chemie International Edition. 52 (2): 767–769. doi:10.1002/anie.201207552. PMID 23172658.
^ Walton, Richard A.; Fanwick, Phillip E.; Girolami, Gregory S.; Murillo, Carlos A.; Johnstone, Erik V. (2014). Girolami, Gregory S.; Sattelberger, Alfred P. (eds.). Inorganic Syntheses: Volume 36. John Wiley & Sons, Inc. pp. 78–81. doi:10.1002/9781118744994.ch16. ISBN 9781118744994.
^ Drago, R. S.; Long, J. R.; Cosmano, R. (1982). "Comparison of the Coordination Chemistry and Inductive Transfer through the Metal-Metal Bond in Adducts of Dirhodium and Dimolybdenum Carboxylates". Inorganic Chemistry. 21: 2196–2201.
^ Lansdown, A. R. (1999). Molybdenum disulphide lubrication. Tribology and Interface Engineering. Vol. 35. Elsevier. ISBN 978-0-444-50032-8.
^ Melchert, Craig. "Greek mólybdos as a Loanword from Lydian" (PDF). University of North Carolina at Chapel Hill. Archived (PDF) from the original on 2013-12-31. Retrieved 2011-04-23.
^ International Molybdenum Association, "Molybdenum History"
^ American Iron and Steel Institute (1948). Accidental use of molybdenum in old sword led to new alloy.
^ Van der Krogt, Peter (2006-01-10). "Molybdenum". Elementymology & Elements Multidict. Archived from the original on 2010-01-23. Retrieved 2007-05-20.
^ a b Gagnon, Steve. "Molybdenum". Jefferson Science Associates, LLC. Archived from the original on 2007-04-26. Retrieved 2007-05-06.
^ Scheele, C. W. K. (1779). "Versuche mit Wasserbley;Molybdaena". Svenska Vetensk. Academ. Handlingar. 40: 238.
^ Hjelm, P. J. (1788). "Versuche mit Molybdäna, und Reduction der selben Erde". Svenska Vetensk. Academ. Handlingar. 49: 268.
^ Hoyt, Samuel Leslie (1921). Metallography. Vol. 2. McGraw-Hill.
^ Krupp, Alfred; Wildberger, Andreas (1888). The metallic alloys: A practical guide for the manufacture of all kinds of alloys, amalgams, and solders, used by metal-workers ... with an appendix on the coloring of alloys. H.C. Baird & Co. p. 60.
^ Gupta, C. K. (1992). Extractive Metallurgy of Molybdenum. CRC Press. ISBN 978-0-8493-4758-0.
^ Reich, Leonard S. (2002-08-22). The Making of American Industrial Research: Science and Business at Ge and Bell, 1876–1926. p. 117. ISBN 9780521522373. Archived from the original on 2014-07-09. Retrieved 2016-04-07.
^ Vokes, Frank Marcus (1963). Molybdenum deposits of Canada. p. 3.
^ Chemical properties of molybdenum - Health effects of molybdenum - Environmental effects of molybdenum Archived 2016-01-20 at the Wayback Machine. lenntech.com
^ Kean, Sam. The Disappearing Spoon. pp. 88–89.
^ Millholland, Ray (August 1941). "Battle of the Billions: American industry mobilizes machines, materials, and men for a job as big as digging 40 Panama Canals in one year". Popular Science: 61. Archived from the original on 2014-07-09. Retrieved 2016-04-07.
^ a b c d e f g h Considine, Glenn D., ed. (2005). "Molybdenum". Van Nostrand's Encyclopedia of Chemistry. New York: Wiley-Interscience. pp. 1038–1040. ISBN 978-0-471-61525-5.
^ Jambor, J.L.; et al. (2002). "New mineral names" (PDF). American Mineralogist. 87: 181. Archived (PDF) from the original on 2007-07-10. Retrieved 2007-04-09.
^ a b "Molybdenum Statistics and Information". U.S. Geological Survey. 2007-05-10. Archived from the original on 2007-05-19. Retrieved 2007-05-10.
^ a b Sebenik, Roger F.; Burkin, A. Richard; Dorfler, Robert R.; Laferty, John M.; Leichtfried, Gerhard; Meyer-Grünow, Hartmut; Mitchell, Philip C. H.; Vukasovich, Mark S.; Church, Douglas A.; Van Riper, Gary G.; Gilliland, James C.; Thielke, Stanley A. (2000). "Molybdenum and Molybdenum Compounds". Ullmann's Encyclopedia of Industrial Chemistry. doi:10.1002/14356007.a16_655. ISBN 3527306730. S2CID 98762721.
^ Gupta, C. K. (1992). Extractive Metallurgy of Molybdenum. CRC Press. pp. 1–2. ISBN 978-0-8493-4758-0.
^ "Dynamic Prices and Charts for Molybdenum". InfoMine Inc. 2007. Archived from the original on 2009-10-08. Retrieved 2007-05-07.
^ "LME to launch minor metals contracts in H2 2009". London Metal Exchange. 2008-09-04. Archived from the original on 2012-07-22. Retrieved 2009-07-28.
^ Langedal, M. (1997). "Dispersion of tailings in the Knabena—Kvina drainage basin, Norway, 1: Evaluation of overbank sediments as sampling medium for regional geochemical mapping". Journal of Geochemical Exploration. 58 (2–3): 157–172. doi:10.1016/S0375-6742(96)00069-6.
^ Coffman, Paul B. (1937). "The Rise of a New Metal: The Growth and Success of the Climax Molybdenum Company". The Journal of Business of the University of Chicago. 10: 30. doi:10.1086/232443.
^ Pie chart of world Mo uses. London Metal Exchange.
^ a b "Molybdenum". AZoM.com Pty. Limited. 2007. Archived from the original on 2011-06-14. Retrieved 2007-05-06.
^ Smallwood, Robert E. (1984). "TZM Moly Alloy". ASTM special technical publication 849: Refractory metals and their industrial applications: a symposium. ASTM International. p. 9. ISBN 9780803102033.
^ "Compatibility of Molybdenum-Base Alloy TZM, with LiF-BeF2-ThF4-UF4". Oak Ridge National Laboratory Report. December 1969. Archived from the original on 2011-07-10. Retrieved 2010-09-02.
^ Cubberly, W. H.; Bakerjian, Ramon (1989). Tool and manufacturing engineers handbook. Society of Manufacturing Engineers. p. 421. ISBN 978-0-87263-351-3.
^ Lal, S.; Patil, R. S. (2001). "Monitoring of atmospheric behaviour of NOx from vehicular traffic". Environmental Monitoring and Assessment. 68 (1): 37–50. doi:10.1023/A:1010730821844. PMID 11336410. S2CID 20441999.
^ Lancaster, Jack L. "Ch. 4: Physical determinants of contrast" (PDF). Physics of Medical X-Ray Imaging. University of Texas Health Science Center. Archived from the original (PDF) on 2015-10-10.
^ Gray, Theodore (2009). The Elements. Black Dog & Leventhal. pp. 105–107. ISBN 1-57912-814-9.
^ Gottschalk, A. (1969). "Technetium-99m in clinical nuclear medicine". Annual Review of Medicine. 20 (1): 131–40. doi:10.1146/annurev.me.20.020169.001023. PMID 4894500.
^ Winer, W. (1967). "Molybdenum disulfide as a lubricant: A review of the fundamental knowledge" (PDF). Wear. 10 (6): 422–452. doi:10.1016/0043-1648(67)90187-1. hdl:2027.42/33266.
^ Topsøe, H.; Clausen, B. S.; Massoth, F. E. (1996). Hydrotreating Catalysis, Science and Technology. Berlin: Springer-Verlag.
^ Moulson, A. J.; Herbert, J. M. (2003). Electroceramics: materials, properties, applications. John Wiley and Sons. p. 141. ISBN 978-0-471-49748-6.
^ International Molybdenum Association Archived 2008-03-09 at the Wayback Machine. imoa.info.
^ Fierro, J. G. L., ed. (2006). Metal Oxides, Chemistry and Applications. CRC Press. pp. 414–455.
^ Centi, G.; Cavani, F.; Trifiro, F. (2001). Selective Oxidation by Heterogeneous Catalysis. Kluwer Academic/Plenum Publishers. pp. 363–384.
^ Horáček, Jan; Akhmetzyanova, Uliana; Skuhrovcová, Lenka; Tišler, Zdeněk; de Paz Carmona, Héctor (1 April 2020). "Alumina-supported MoNx, MoCx and MoPx catalysts for the hydrotreatment of rapeseed oil". Applied Catalysis B: Environmental. 263: 118328. doi:10.1016/j.apcatb.2019.118328. ISSN 0926-3373. S2CID 208758175.
^ Scott, C.; Lyons, T. W.; Bekker, A.; Shen, Y.; Poulton, S. W.; Chu, X.; Anbar, A. D. (2008). "Tracing the stepwise oxygenation of the Proterozoic ocean". Nature. 452 (7186): 456–460. Bibcode:2008Natur.452..456S. doi:10.1038/nature06811. PMID 18368114. S2CID 205212619.
^ Enemark, John H.; Cooney, J. Jon A.; Wang, Jun-Jieh; Holm, R. H. (2004). "Synthetic Analogues and Reaction Systems Relevant to the Molybdenum and Tungsten Oxotransferases". Chem. Rev. 104 (2): 1175–1200. doi:10.1021/cr020609d. PMID 14871153.
^ Mendel, Ralf R.; Bittner, Florian (2006). "Cell biology of molybdenum". Biochimica et Biophysica Acta (BBA) - Molecular Cell Research. 1763 (7): 621–635. doi:10.1016/j.bbamcr.2006.03.013. PMID 16784786.
^ Russ Hille; James Hall; Partha Basu (2014). "The Mononuclear Molybdenum Enzymes". Chem. Rev. 114 (7): 3963–4038. doi:10.1021/cr400443z. PMC 4080432. PMID 24467397.
^ Kisker, C.; Schindelin, H.; Baas, D.; Rétey, J.; Meckenstock, R. U.; Kroneck, P. M. H. (1999). "A structural comparison of molybdenum cofactor-containing enzymes" (PDF). FEMS Microbiol. Rev. 22 (5): 503–521. doi:10.1111/j.1574-6976.1998.tb00384.x. PMID 9990727. Archived (PDF) from the original on 2017-08-10. Retrieved 2017-10-25.
^ a b Mitchell, Phillip C. H. (2003). "Overview of Environment Database". International Molybdenum Association. Archived from the original on 2007-10-18. Retrieved 2007-05-05.
^ Mendel, Ralf R. (2013). "Chapter 15 Metabolism of Molybdenum". In Banci, Lucia (ed.). Metallomics and the Cell. Metal Ions in Life Sciences. Vol. 12. Springer. doi:10.1007/978-94-007-5561-10_15. ISBN 978-94-007-5560-4. Electronic-book ISBN 978-94-007-5561-1. ISSN 1559-0836. Electronic ISSN 1868-0402.
^ Chi Chung, Lee; Markus W., Ribbe; Yilin, Hu (2014). "Chapter 7. Cleaving the N,N Triple Bond: The Transformation of Dinitrogen to Ammonia by Nitrogenases". In Peter M.H. Kroneck; Martha E. Sosa Torres (eds.). The Metal-Driven Biogeochemistry of Gaseous Compounds in the Environment. Metal Ions in Life Sciences. Vol. 14. Springer. pp. 147–174. doi:10.1007/978-94-017-9269-1_6. ISBN 978-94-017-9268-4. PMID 25416393.
^ Dos Santos, Patricia C.; Dean, Dennis R. (2008). "A newly discovered role for iron-sulfur clusters". PNAS. 105 (33): 11589–11590. Bibcode:2008PNAS..10511589D. doi:10.1073/pnas.0805713105. PMC 2575256. PMID 18697949.
^ Schwarz, Guenter; Belaidi, Abdel A. (2013). "Chapter 13. Molybdenum in Human Health and Disease". In Astrid Sigel; Helmut Sigel; Roland K. O. Sigel (eds.). Interrelations between Essential Metal Ions and Human Diseases. Metal Ions in Life Sciences. Vol. 13. Springer. pp. 415–450. doi:10.1007/978-94-007-7500-8_13. ISBN 978-94-007-7499-5. PMID 24470099.
^ Mendel, Ralf R. (2009). "Cell biology of molybdenum". BioFactors. 35 (5): 429–34. doi:10.1002/biof.55. PMID 19623604. S2CID 205487570.
^ Blaylock Wellness Report, February 2010, page 3.
^ Cohen, H. J.; Drew, R. T.; Johnson, J. L.; Rajagopalan, K. V. (1973). "Molecular Basis of the Biological Function of Molybdenum. The Relationship between Sulfite Oxidase and the Acute Toxicity of Bisulfite and SO2". Proceedings of the National Academy of Sciences of the United States of America. 70 (12 Pt 1–2): 3655–3659. Bibcode:1973PNAS...70.3655C. doi:10.1073/pnas.70.12.3655. PMC 427300. PMID 4519654.
^ Holleman, Arnold F.; Wiberg, Egon (2001). Inorganic chemistry. Academic Press. p. 1384. ISBN 978-0-12-352651-9.
^ Curzon, M. E. J.; Kubota, J.; Bibby, B. G. (1971). "Environmental Effects of Molybdenum on Caries". Journal of Dental Research. 50 (1): 74–77. doi:10.1177/00220345710500013401. S2CID 72386871.
^ a b "Risk Assessment Information System: Toxicity Summary for Molybdenum". Oak Ridge National Laboratory. Archived from the original on September 19, 2007. Retrieved 2008-04-23.
^ a b Coughlan, M. P. (1983). "The role of molybdenum in human biology". Journal of Inherited Metabolic Disease. 6 (S1): 70–77. doi:10.1007/BF01811327. PMID 6312191. S2CID 10114173.
^ Barceloux, Donald G.; Barceloux, Donald (1999). "Molybdenum". Clinical Toxicology. 37 (2): 231–237. doi:10.1081/CLT-100102422. PMID 10382558.
^ Yang, Chung S. (1980). "Research on Esophageal Cancer in China: a Review" (PDF). Cancer Research. 40 (8 Pt 1): 2633–44. PMID 6992989. Archived (PDF) from the original on 2015-11-23. Retrieved 2011-12-30.
^ Nouri, Mohsen; Chalian, Hamid; Bahman, Atiyeh; Mollahajian, Hamid; et al. (2008). "Nail Molybdenum and Zinc Contents in Populations with Low and Moderate Incidence of Esophageal Cancer" (PDF). Archives of Iranian Medicine. 11 (4): 392–6. PMID 18588371. Archived from the original (PDF) on 2011-07-19. Retrieved 2009-03-23.
^ Zheng, Liu; et al. (1982). "Geographical distribution of trace elements-deficient soils in China". Acta Ped. Sin. 19: 209–223.
^ Taylor, Philip R.; Li, Bing; Dawsey, Sanford M.; Li, Jun-Yao; Yang, Chung S.; Guo, Wande; Blot, William J. (1994). "Prevention of Esophageal Cancer: The Nutrition Intervention Trials in Linxian, China" (PDF). Cancer Research. 54 (7 Suppl): 2029s–2031s. PMID 8137333. Archived (PDF) from the original on 2016-09-17. Retrieved 2016-07-01.
^ Abumrad, N. N. (1984). "Molybdenum—is it an essential trace metal?". Bulletin of the New York Academy of Medicine. 60 (2): 163–71. PMC 1911702. PMID 6426561.
^ Smolinsky, B; Eichler, S. A.; Buchmeier, S.; Meier, J. C.; Schwarz, G. (2008). "Splice-specific Functions of Gephyrin in Molybdenum Cofactor Biosynthesis". Journal of Biological Chemistry. 283 (25): 17370–9. doi:10.1074/jbc.M800985200. PMID 18411266.
^ Reiss, J. (2000). "Genetics of molybdenum cofactor deficiency". Human Genetics. 106 (2): 157–63. doi:10.1007/s004390051023. PMID 10746556.
^ Gropper, Sareen S.; Smith, Jack L.; Carr, Timothy P. (2016-10-05). Advanced Nutrition and Human Metabolism. Cengage Learning. ISBN 978-1-337-51421-7.
^ Turnlund, J. R.; Keyes, W. R.; Peiffer, G. L. (October 1995). "Molybdenum absorption, excretion, and retention studied with stable isotopes in young men at five intakes of dietary molybdenum". The American Journal of Clinical Nutrition. 62 (4): 790–796. doi:10.1093/ajcn/62.4.790. ISSN 0002-9165. PMID 7572711.
^ Suttle, N. F. (1974). "Recent studies of the copper-molybdenum antagonism". Proceedings of the Nutrition Society. 33 (3): 299–305. doi:10.1079/PNS19740053. PMID 4617883.
^ Hauer, Gerald Copper deficiency in cattle Archived 2011-09-10 at the Wayback Machine. Bison Producers of Alberta. Accessed Dec. 16, 2010.
^ Nickel, W (2003). "The Mystery of nonclassical protein secretion, a current view on cargo proteins and potential export routes". Eur. J. Biochem. 270 (10): 2109–2119. doi:10.1046/j.1432-1033.2003.03577.x. PMID 12752430.
^ Brewer GJ; Hedera, P.; Kluin, K. J.; Carlson, M.; Askari, F.; Dick, R. B.; Sitterly, J.; Fink, J. K. (2003). "Treatment of Wilson disease with ammonium tetrathiomolybdate: III. Initial therapy in a total of 55 neurologically affected patients and follow-up with zinc therapy". Arch Neurol. 60 (3): 379–85. doi:10.1001/archneur.60.3.379. PMID 12633149.
^ Brewer, G. J.; Dick, R. D.; Grover, D. K.; Leclaire, V.; Tseng, M.; Wicha, M.; Pienta, K.; Redman, B. G.; Jahan, T.; Sondak, V. K.; Strawderman, M.; LeCarpentier, G.; Merajver, S. D. (2000). "Treatment of metastatic cancer with tetrathiomolybdate, an anticopper, antiangiogenic agent: Phase I study". Clinical Cancer Research. 6 (1): 1–10. PMID 10656425.
^ Institute of Medicine (2000). "Molybdenum". Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc. Washington, DC: The National Academies Press. pp. 420–441. doi:10.17226/10026. ISBN 978-0-309-07279-3. PMID 25057538. S2CID 44243659.
^ "Overview on Dietary Reference Values for the EU population as derived by the EFSA Panel on Dietetic Products, Nutrition and Allergies" (PDF). 2017. Archived from the original (PDF) on 2017-08-28. Retrieved 2017-09-10.
^ Tolerable Upper Intake Levels For Vitamins And Minerals (PDF), European Food Safety Authority, 2006, archived from the original (PDF) on 2016-03-16, retrieved 2017-09-10
^ "Federal Register May 27, 2016 Food Labeling: Revision of the Nutrition and Supplement Facts Labels. FR page 33982" (PDF). Archived (PDF) from the original on August 8, 2016. Retrieved September 10, 2017.
^ "Daily Value Reference of the Dietary Supplement Label Database (DSLD)". Dietary Supplement Label Database (DSLD). Archived from the original on 7 April 2020. Retrieved 16 May 2020.
^ "Material Safety Data Sheet – Molybdenum". The REMBAR Company, Inc. 2000-09-19. Archived from the original on March 23, 2007. Retrieved 2007-05-13.
^ "Material Safety Data Sheet – Molybdenum Powder". CERAC, Inc. 1994-02-23. Archived from the original on 2011-07-08. Retrieved 2007-10-19.
^ "NIOSH Documentation for IDLHs Molybdenum". National Institute for Occupational Safety and Health. 1996-08-16. Archived from the original on 2007-08-07. Retrieved 2007-05-31.
^ "CDC – NIOSH Pocket Guide to Chemical Hazards – Molybdenum". www.cdc.gov. Archived from the original on 2015-11-20. Retrieved 2015-11-20.
Lettera di Giulio Candida al signor Vincenzo Petagna - Sulla formazione del molibdeno. Naples: Giuseppe Maria Porcelli. 1785.
Molybdenum at The Periodic Table of Videos (University of Nottingham)
Mineral & Exploration – Map of World Molybdenum Producers 2009
"Mining A Mountain" Popular Mechanics, July 1935 pp. 63–64
Site for global molybdenum info
CDC – NIOSH Pocket Guide to Chemical Hazards
This page was last edited on 22 January 2023, at 04:02.
Sihem Guerarra
Faculty of Exact Sciences and Sciences of Nature and Life, Department of Mathematics and informatics, University of Oum El Bouaghi, 04000, Algeria
* Corresponding author: Sihem Guerarra
Received June 2019 Revised September 2019 Published February 2020
In this paper we derive the extremal ranks and inertias, with respect to $ X $, of the matrix $ X+X^{\ast}-P $, where $ P\in\mathbb{C}_{H}^{n\times n} $ is given and $ X $ is a least-rank solution to the matrix equation $ AXB = C $, and then give necessary and sufficient conditions for $ X+X^{\ast}\succ P $ $ \left( \succeq P\text{, }\prec P\text{, }\preceq P\right) $ in the Löwner partial ordering. As a consequence, we establish necessary and sufficient conditions for the matrix equation $ AXB = C $ to have a Hermitian Re-positive or Re-negative definite solution.
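As a quick numerical companion to the abstract (a toy illustration, not the paper's method), one can build a particular solution of a consistent equation $ AXB = C $ with Moore-Penrose pseudoinverses and test a Löwner comparison of $ X+X^{\ast} $ against $ P $ through the eigenvalues of the Hermitian difference. Note that $ X = A^{+}CB^{+} $ is the minimum-norm solution and need not be a least-rank one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a consistent equation AXB = C by construction.
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
X_true = rng.standard_normal((3, 3))
C = A @ X_true @ B

# One particular solution via Moore-Penrose pseudoinverses (minimum-norm,
# not necessarily least rank).
X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)
consistent = np.allclose(A @ X @ B, C)

# Loewner comparison X + X* >= P holds iff every eigenvalue of the
# Hermitian matrix X + X* - P is nonnegative.
P = np.zeros((3, 3))
H = X + X.conj().T - P
eigs = np.linalg.eigvalsh(H)
loewner_geq = bool(np.all(eigs >= -1e-10))
```

Replacing `P` by another Hermitian matrix tests the other orderings the same way, with strict inequalities on the eigenvalues for $ \succ $ and $ \prec $.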
Keywords: Matrix equation, Moore-Penrose generalized inverse, Rank, Inertias, Least-rank solution.
Mathematics Subject Classification: Primary: 15A24; Secondary: 15A09, 15A03, 15B57.
Citation: Sihem Guerarra. Maximum and minimum ranks and inertias of the Hermitian parts of the least rank solution of the matrix equation AXB = C. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 75-86. doi: 10.3934/naco.2020016
A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, 2$^{\rm nd}$ ed., Springer, 2003. Google Scholar
S. L. Cambell and C. D. Meyer, Generalized Inverse of Linear Transformations, SIAM, 2008. doi: 10.1137/1.9780898719048.ch0. Google Scholar
J. Groß, Nonnegative-definite and positive definite solutions to the matrix equation $AXA^{\ast} = B$-revisited, Linear Algebra Appl., 321 (2000), 123-129. doi: 10.1016/S0024-3795(00)00033-1. Google Scholar
S. Guerarra and S. Guedjiba, Common least-rank solution of matrix equations $A_{1}X_{1}B_{1} = C_{1}$ and $A_{2}X_{2} B_{2} = C_{2}$ with applications, Facta Universitatis (Niš). Ser. Math. Inform., 29 (2014), 313–323. Google Scholar
S. Guerarra and S. Guedjiba, Common Hermitian least-rank solution of matrix equations $A_{1}XA_{1}^{\ast} = B_{1}$ and $A_{2}XA_{2}^{\ast} = B_{2}$ subject to inequality restrictions, Facta Universitatis (Niš). Ser. Math. Inform., 30 (2015), 539–554. Google Scholar
S. Guerarra, Positive and negative definite submatrices in an Hermitian least rank solution of the matrix equation, Numer. Algebra, Contr. & Optim., 9 (2019), 15-22. Google Scholar
C. G. Khatri and S. K. Mitra, Hermitian and nonnegative definite solutions of linear matrix equations, SIAM J. Appl. Math., 31 (1976), 579-585. doi: 10.1137/0131050. Google Scholar
Y. Liu, Ranks of least squares solutions of the matrix equation $AXB = C$, Comput. Math. Appl., 55 (2008), 1270-1278. doi: 10.1016/j.camwa.2007.06.023. Google Scholar
R. Penrose, A generalized inverse for matrices, Proc. Camb. Phil. Soc., 51 (1955), 406-413. Google Scholar
P. S. Stanimirović, G-inverses and canonical forms, Facta Universitatis (Niš). Ser. Math. Inform., 15 (2000), 1–14. Google Scholar
Y. Tian, Rank Equalities Related to Generalized Inverses of Matrices and Their Applications, Master Thesis, Montreal, Quebec, Canada, 2000. Google Scholar
Y. Tian, The maximal and minimal ranks of some expressions of generalized inverses of matrices, Southeast Asian Bull. Math., 25 (2002), 745-755. doi: 10.1007/s100120200015. Google Scholar
Y. Tian and S. Cheng, The maximal and minimal ranks of $A-BXC$ with applications, New York J. Math., 9 (2003), 345-362. Google Scholar
Y. Tian, Equalities and inequalities for inertias of Hermitian matrices with applications, Linear Algebra Appl., 433 (2010), 263-296. doi: 10.1016/j.laa.2010.02.018. Google Scholar
Y. Tian, Maximization and minimization of the rank and inertias of the Hermitian matrix expression $A-BX-\left(BX\right) ^{\ast}$ with applications, Linear Algebra Appl., 434 (2011), 2109-2139. doi: 10.1016/j.laa.2010.12.010. Google Scholar
Y. Tian and H. Wang, Relations between least squares and least rank solution of the matrix equations $AXB=C$, Appl. Math. Comput., 219 (2013), 10293-10301. doi: 10.1016/j.amc.2013.03.137. Google Scholar
X. Zhang, Hermitian nonnegative-definite and positive-definite solutions of the matrix equation $AXB=C$, Appl. Math. E-Notes, 4 (2004), 40-47. Google Scholar
The (functional) law of the iterated logarithm of the sojourn time for a multiclass queue
Yongjiang Guo a and Yuantao Song b
School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
School of Engineering Science, University of the Chinese Academy of Sciences, Beijing, 100049, China
Received September 2017 Revised January 2018 Published December 2018
Two types of the law of the iterated logarithm (LIL) and one functional LIL (FLIL) are established for the sojourn time process of a multiclass queueing model with a priority service discipline, one server and $K$ customer classes, each class characterized by a batch renewal arrival process and independent and identically distributed (i.i.d.) service times. The LIL and FLIL limits quantify the magnitude of the asymptotic stochastic fluctuations of the sojourn time process, compensated by its deterministic fluid limit, in two forms: numerical and functional. The limits are established in three cases defined by the traffic intensity: underloaded, critically loaded and overloaded. We prove the results by an approach based on strong approximation, which approximates discrete performance processes with reflected Brownian motions. We present numerical examples to provide insight into these LIL results.
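The classical LIL scaling underlying these limits is easy to illustrate numerically. The sketch below (our own toy example, not the paper's queueing model) computes $|S_n|/\sqrt{2n\log\log n}$ for a $\pm 1$ random walk with $\sigma = 1$; by the LIL the limsup of this ratio is 1, so for large $n$ the sampled values should be of order one:

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def lil_ratio(n):
    """|S_n| / sqrt(2 * n * log log n) for a +/-1 random walk (sigma = 1)."""
    s = sum(random.choice((-1, 1)) for _ in range(n))
    return abs(s) / math.sqrt(2 * n * math.log(math.log(n)))

# Sample the ratio at two large horizons; both values are typically below 1.
ratios = [lil_ratio(n) for n in (10_000, 100_000)]
```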
Keywords: Multiclass queue, the law of iterated logarithm, functional law of iterated logarithm, sojourn time process, strong approximation.
Mathematics Subject Classification: Primary: 60K25, 90B36; Secondary: 90B22, 68M20.
Citation: Yongjiang Guo, Yuantao Song. The (functional) law of the iterated logarithm of the sojourn time for a multiclass queue. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2018192
Figure 1. The LIL limits in Example 3
Table 1. The LIL and FLIL limits for (Ⅰ) in Example 1

| $k$ | 1 | 2 | 3 | 4 | 5 | 6 |
| $Z^*_k=Z^*_{sup, k}$ | $0$ | $0$ | $0$ | $\sqrt{3}$ | $\sqrt{3.9}$ | $\sqrt{4.8}$ |
| $Z^*_{inf, k}$ | $0$ | $0$ | $0$ | $0$ | $-\sqrt{3.9}$ | $-\sqrt{4.8}$ |
| $\mathcal{K}_{Z_{k}}$ | $\{0\}$ | $\{0\}$ | $\{0\}$ | $\Phi(\mathcal{G}(\sqrt{3}))$ | $\mathcal{G}(\sqrt{3.9})$ | $\mathcal{G}(\sqrt{4.8})$ |
| $\mathcal{S}^*_{k}=\mathcal{S}^*_{sup, k}$ | $0$ | $0$ | $0$ | $10\sqrt{3}$ | | |
| $\mathcal{S}^*_{inf, k}$ | $0$ | $0$ | $0$ | $0$ | | |
| $\mathcal{K}_{\mathcal{S}_{k}}$ | $\{0\}$ | $\{0\}$ | $\{0\}$ | $\Phi(\mathcal{G}(10\sqrt{3}))$ | | |

Table 2. The LIL and FLIL limits for (Ⅱ) in Example 1

| $k$ | 1 | 2 | 3 | 4 | 5 | 6 |
| $Z^*_k=Z^*_{sup, k}$ | $0$ | $0$ | $0$ | $\sqrt{3.6}$ | $\sqrt{4.5}$ | $\sqrt{5.4}$ |
| $Z^*_{inf, k}$ | $0$ | $0$ | $0$ | $-\sqrt{3.6}$ | $-\sqrt{4.5}$ | $-\sqrt{5.4}$ |
| $\mathcal{K}_{Z_{k}}$ | $\{0\}$ | $\{0\}$ | $\{0\}$ | $\mathcal{G}(3.6)$ | $\mathcal{G}(4.5)$ | $\mathcal{G}(5.4)$ |
| $\mathcal{S}^*_{inf, k}$ | $0$ | $0$ | $0$ | $-20\sqrt{3}$ | | |
| $\mathcal{K}_{\mathcal{S}_{k}}$ | $\{0\}$ | $\{0\}$ | $\{0\}$ | $\mathcal{G}(20\sqrt{3})$ | | |

| $k$ | 1 | 2 | 3 | 4 | 5 |
| $Z^*_k=Z^*_{sup, k}$ | $0$ | $0$ | $C_{3}$ | $C_{4}$ | $C_{5}$ |
| $Z^*_{inf, k}$ | $0$ | $0$ | $0$ | $-C_{4}$ | $-C_{5}$ |
| $\mathcal{K}_{Z_{k}}$ | $\{0\}$ | $\{0\}$ | $\Phi(\mathcal{G}(C_{3}))$ | $\mathcal{G}(C_{4})$ | $\mathcal{G}(C_{5})$ |
| $\mathcal{S}^*_{k}=\mathcal{S}^*_{sup, k}$ | $0$ | $0$ | $5C_{3}$ | | |
| $\mathcal{S}^*_{inf, k}$ | $0$ | $0$ | $0$ | | |
| $\mathcal{K}_{\mathcal{S}_{k}}$ | $\{0\}$ | $\{0\}$ | $\Phi(\mathcal{G}(5C_{3}))$ | | |

Table 4. The LIL and FLIL limits for (Ⅲ) in Example 2

| $k$ | 1 | 2 | 3 | 4 | 5 |
| $Z^*_{k}=Z^*_{sup, k}$ | $0$ | $0$ | $D_{3}$ | $D_{4}$ | $D_{5}$ |
| $Z^*_{inf, k}$ | $0$ | $0$ | $-D_{3}$ | $-D_{4}$ | $-D_{5}$ |
| $\mathcal{K}_{Z_{k}}$ | $0$ | $0$ | $\mathcal{G}(D_{3})$ | $\mathcal{G}(D_{4})$ | $\mathcal{G}(D_{5})$ |
| $\mathcal{S}^*_{k}=\mathcal{S}^*_{sup, k}$ | $0$ | $0$ | $D$ | | |
| $\mathcal{S}^*_{inf, k}$ | $0$ | $0$ | $-D$ | | |
| $\mathcal{K}_{\mathcal{S}_{k}}$ | $0$ | $0$ | $\mathcal{G}(D)$ | | |
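The FLIL limit sets tabulated above (written $\mathcal{G}(\cdot)$) are presumably scaled copies of Strassen's classical limit set; under that assumption, a sketch of the underlying set for standard Brownian motion is:

```latex
% Strassen's limit set: the FLIL cluster set of
% (B(nt)/\sqrt{2n\log\log n})_{t\in[0,1]} is
\mathcal{K}=\Big\{f\in C[0,1]:\ f(0)=0,\ f\ \text{absolutely continuous},\
\int_0^1 \big(f'(s)\big)^2\,ds\le 1\Big\},
% so that, presumably, \mathcal{G}(c)=c\,\mathcal{K}.
```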
Downregulating expression of OPTN elevates neuroinflammation via AIM2 inflammasome- and RIPK1-activating mechanisms in APP/PS1 transgenic mice
Long-Long Cao, Pei-Pei Guan, Shen-Qing Zhang, Yi Yang, Xue-Shi Huang & Pu Wang
Journal of Neuroinflammation, volume 18, Article number: 281 (2021)
Neuroinflammation is thought to be a cause of Alzheimer's disease (AD) and is driven in part by inadequate mitophagy. Because optineurin (OPTN) is a mitophagy receptor, we aimed to reveal its regulatory roles in neuroinflammation during the pathogenesis of AD.
BV2 cells and APP/PS1 transgenic (Tg) mice were used as in vitro and in vivo experimental models, respectively, to determine the regulatory roles of OPTN in the neuroinflammation of AD. Molecular techniques, including quantitative RT-PCR (qRT-PCR), western blotting, enzyme-linked immunosorbent assay (ELISA), co-immunoprecipitation (Co-IP) and immunofluorescence (IF), were employed to reveal the underlying mechanisms.
We identified key roles of OPTN in regulating neuroinflammation: OPTN depresses the activity of absent in melanoma 2 (AIM2) inflammasomes and receptor-interacting serine/threonine kinase 1 (RIPK1)-mediated NF-κB inflammatory signaling. In detail, we found that OPTN expression was downregulated in APP/PS1 Tg mice, resulting in activation of AIM2 inflammasomes owing to a deficiency in mitophagy. Upon ectopic expression, OPTN blocked the effects of Aβ oligomers (Aβo) on AIM2 inflammasome activation by inhibiting mRNA expression of AIM2 and of apoptosis-associated speck-like protein containing a C-terminal caspase recruitment domain (ASC), leading to a reduction in the active forms of caspase-1 and interleukin (IL)-1β in microglial cells. Moreover, RIPK1 was negatively regulated by OPTN via ubiquitin-dependent proteolysis, limiting the synthesis of IL-1β driven by the transcriptional activity of NF-κB in BV2 cells. As an E3 ligase, OPTN binds through its UBAN domain to the death domain (DD) of RIPK1 to facilitate RIPK1 ubiquitination. Consistent with these observations, ectopically expressed OPTN in APP/PS1 Tg mice deactivated microglial cells and astrocytes via the AIM2 inflammasome and RIPK1-dependent NF-κB pathways, leading to reduced neuroinflammation.
These results suggest that OPTN alleviates neuroinflammation through the AIM2 and RIPK1 pathways and that OPTN deficiency may be a potential factor in the occurrence of AD.
The pathological hallmarks of Alzheimer's disease (AD) are widely believed to be the deposition of extracellular β-amyloid protein (Aβ) and intracellular hyperphosphorylated tau, both of which are triggered by impaired mitochondrial function [1,2,3]. Indeed, mitochondrial biogenesis is impaired in AD patients through deactivation of the PGC1α-NRF-TFAM pathway [4]. Furthermore, mitochondrial dysfunction induces the production and deposition of Aβ in the brains of AD animals [5,6,7], pointing to a link between mitochondrial impairment and AD. Reciprocally, Aβ deposited in β-amyloid plaques (APs) may cause further mitochondrial dysfunction and impair mitochondrial biogenesis [8]. Mechanistically, this is attributed to a lack of mitophagy, the process responsible for recycling and removing impaired mitochondria via autophagy [9]. As a selective autophagy pathway, deficient mitophagy has been linked to AD through impaired synaptic function and memory [10, 11]. Moreover, activating mitophagy reduces the formation of APs and neurofibrillary tangles (NFTs) in AD patients and animals [10]. Beyond AD, impaired mitochondria are recruited to mitophagy by a series of mitophagy receptors, such as the ubiquitin-binding receptors optineurin (OPTN) and p62 (SQSTM1), among others [12]. Along these lines, mitophagy impairment has been suggested as a critical event initiating AD, and restoring mitophagy might help ameliorate the symptoms of AD [10, 13].
During this process, neuroinflammation appears to play a pivotal role in accelerating the progression of AD. Consistent with this hypothesis, microglial cells are already activated in the preclinical stage of AD, as shown by positron emission tomography (PET) imaging [14]. In addition, inflammatory factors secreted by activated microglia can further exacerbate neuronal death [15]. In the brains of AD patients, many activated microglial cells have been observed around APs [16, 17]. This is ascribed to the presence of Aβ receptors, such as NOD-like receptors (NLRs) [18], Toll-like receptors (TLRs) [19] and the receptor for advanced glycation end products (RAGE) [20], on the surface of microglial cells. Mechanistically, Aβ can penetrate the cell membrane of microglial cells and bind the intracellular domains of NLRs, which activates NOD-, LRR- and pyrin domain-containing 3 (NLRP3) inflammasomes, leading to the release of proinflammatory cytokines, such as interleukin-1β (IL-1β), and resulting in neuronal death [21]. In cultured microglial cells, Aβ1–42 fibrils promote the maturation and secretion of tumor necrosis factor α (TNF-α) and IL-1β through the TLR2, TLR4, TLR6 and CD36 receptors [22, 23]. In addition, RAGE can bind Aβ to induce neurotoxicity by facilitating the secretion of IL-1β and TNF-α in microglial cells [24]. These findings all emphasize the important roles of neuroinflammation in AD.
Given the critical roles of neuroinflammation in AD, pyroptosis is of particular interest: it is a newly discovered mode of programmed cell death characterized by rapid rupture of the cell membrane, leading to the release of proinflammatory factors. During pyroptosis, cell lysis is primarily mediated by caspase-1 [25,26,27]. Activated caspase-1 can form pores of different sizes in the cell membrane [25], resulting in a reduction in the concentration of intracellular ions, an increase in osmotic pressure, and cell swelling, ultimately leading to dissolution and release of inflammatory substances [28]. This is consistent with the observation that cells enlarge as they die by pyroptosis [29]. The mechanism is supported by the fact that glycine specifically blocks ion flow in damaged eukaryotic cells, which prevents swelling and lysis during pyroptosis, so that glycine acts as a cytoprotective agent [30]. Similarly, the cytoskeleton is impaired in response to pyroptosis [31], even though caspase-1 is not involved in chromatin DNA cleavage over the course of pyroptosis [32].
Although pyroptosis mediates the effects of neuroinflammation on neuronal death, questions remain regarding the mechanisms by which cells sense intracellular and extracellular "danger" signals [33]. Among inflammatory pathways, TLRs can activate NF-κB, MAPK and interferon regulatory factors (IRFs), through which they induce activated microglia to release proinflammatory mediators, such as NO [34]. As with TLR4, inflammasome signaling cascades are initiated when the NLRs nucleotide-binding oligomerization domain-containing protein 1 (NOD1) and NOD2 recognize and are activated by their ligands, leading to the release of proinflammatory cytokines, such as IL-1β and IL-18, through which pyroptosis occurs [35]. In addition, both TLRs and NODs can induce the production and accumulation of pro-IL-1β in microglial cells. Moreover, activation of TLR2 or TLR4 alone can promote the maturation and secretion of IL-1β, mediated by stimulation of caspase-1 and accelerated release of endogenous adenosine triphosphate [36]. Given the central roles of caspase-1 and IL-1β in inflammasomes, all of this evidence indicates the involvement of TLRs and NODs in inflammasome activation.
The inflammasome is composed of multiple cytoplasmic proteins, including NOD-like receptors (NLRs) or absent in melanoma 2 (AIM2), caspase-1 and apoptosis-associated speck-like protein containing a C-terminal caspase recruitment domain (ASC), which activate and maintain immunity under both physiological and pathological conditions [37]. Notably, ASC is responsible for recruiting NLRs to caspase-1 [38]. Based on their different N-terminal domains, the NLR family can be divided into several subfamilies, such as NLRP and AIM2. In NLRP subfamily proteins, the N-terminus contains a pyrin domain (PYD), which interacts with the PYD domain of ASC, leading to the formation of a complex with pro-caspase-1. Meanwhile, NLRPs contain a caspase recruitment domain (CARD), which can directly bind to pro-caspase-1 [39]. AIM2 inflammasomes can bind DNA through their HIN200 domain, which mediates ASC oligomerization to initiate caspase-1-dependent inflammasome activation, leading to the maturation and secretion of the proinflammatory cytokines IL-1β and IL-18 [40].
During inflammasome activation, TLRs mediate the effects of ligands or endogenous stimulators, such as Myd88 and TRIF, on the activation of NF-κB or AP-1, resulting in upregulation of NLRP3 expression [41]. Upon NLRP3 activation, large inflammasome complexes are formed by recruitment of ASC and caspase-1, unleashing the hydrolytic activity of caspase-1, which cleaves pro-IL-1β and pro-IL-18 to produce the mature, bioactive forms of IL-1β and IL-18 [37]. In addition to this canonical inflammasome-activating pathway, lipopolysaccharide (LPS) can bind and activate caspase-11 [37], which results in the aggregation of NLRP3, leading to accelerated maturation and secretion of IL-1β and IL-18 [42]. Activated caspase-11 can also induce cleavage of the N-terminal region of gasdermin-D (GSDMD), leading to pyroptosis [41]. As the natural ligand of TLR4, LPS has been proposed to activate the NLRP3 inflammasome via Toll-like receptor adaptor molecule 1 (TICAM1/TRIF)-, receptor-interacting serine/threonine kinase 1 (RIPK1)-, FAS-associated death domain protein (FADD)- and caspase-8-activating pathways [36, 43]. In addition to NLRP3 inflammasomes, AIM2 inflammasomes can activate cGAS-STING-TBK1-NF-κB signaling cascades, leading to the synthesis of proinflammatory cytokines, such as IFN-β and IL-1β [44].
In AD transgenic mice, activation of the NLRP3 inflammasome promotes the production and deposition of Aβ, and knocking out NLRP3 expression improves spatial memory by reducing the deposition of Aβ in the brain [45]. Reciprocally, the deposition of Aβ in microglial cells activates NLRP3 inflammasomes, leading to the maturation and secretion of IL-1β [21], which accelerates AD progression. Given the interaction between RIPK1 and the NLRP3 inflammasome [46], RIPK1 is likely involved in regulating the pathogenesis of AD via inflammatory mechanisms. Indeed, mRNA and protein expression of RIPK1 is markedly increased in AD patients compared to control subjects [47]. Since RIPK1 is highly expressed in microglial cells in mouse and human brains [48], it is generally believed that RIPK1 plays an important role in neuroinflammation [49]. Inhibiting the kinase activity of RIPK1 induces microglial cells to degrade Aβ [48].
Based on these clues, we identified OPTN for the first time as an essential receptor for mitophagy that regulates neuroinflammation. Specifically, OPTN deficiency in AD activates AIM2 inflammasomes and RIPK1-mediated inflammation. Furthermore, OPTN expression blocks the activation of AIM2 inflammasomes by inhibiting the expression of AIM2, ASC, caspase-1 and IL-1β in microglial cells. In addition, OPTN suppresses the translocation of NF-κB from the cytosol to the nucleus by inducing ubiquitin-dependent RIPK1 degradation mechanisms. Through these mechanisms, OPTN overexpression suppresses neuroinflammation during the course of AD development and progression.
Horseradish peroxidase-labeled secondary antibodies were purchased from Sigma-Aldrich (St. Louis, MO, USA). An antibody specific for OPTN (sc-166576, mouse, 1:1000 for WB) was obtained from Santa Cruz Biotechnology. Antibodies against AIM2 (sc-293174, mouse, 1:500 for WB) and ASC (sc-514414, mouse, 1:500 for WB) were also obtained from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Fluorescence-tagged secondary antibodies (A32732 rabbit, A11034 rabbit, A32727 mouse, A11029 mouse, 1:500 for IF) were purchased from Thermo Fisher Scientific (Waltham, MA, USA). Other antibodies, including β-actin (#3700, mouse, 1:2000 for WB), Histone (#4499, rabbit, 1:2000 for WB), IL-1β (#12242, mouse, 1:2000 for WB), RIPK1 (#3493, rabbit, 1:2000 for WB), p-IKBα (#2859, rabbit, 1:2000 for WB), IKBα (#4814, mouse, 1:2000 for WB), GFAP (#80788, rabbit, 1:5000 for WB) and ubiquitin (#3933, rabbit, 1:1000 for WB), were from Cell Signaling Technology (Danvers, MA, USA). An Iba1 antibody (rabbit, 1:200 for IHC) was purchased from Wako Life Sciences (Wako, Tokyo, Japan). Antibodies against caspase-1 (22915-1-AP, rabbit, 1:1000 for WB), Flag (66008-3-Ig, mouse, 0.5–4.0 µg for IP and 1:2000 for WB) and HA (51064-2-AP, rabbit, 0.5–4.0 µg for IP and 1:4000 for WB) were purchased from Proteintech (Wuhan, Hubei, P.R.C.). All reagents for the sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) experiments were purchased from Bio-Rad Laboratories (Hercules, CA, USA). Antibodies specific against NF-κB (ab16502, rabbit, 1:2000 for WB) and HSP60 (ab190828, rabbit, 1:1000 for WB) were obtained from Abcam (Cambridge, MA, USA). DAPI was obtained from Beyotime Institute of Biotechnology (Haimen, Jiangsu, China). Bafilomycin A1 (Baf A1) was obtained from MedChemExpress (Monmouth Junction, NJ, USA). High-fidelity restriction enzymes XmaI, SalI, XhoI and HindIII were obtained from New England Biolabs (Beverly, MA, USA).
All reagents for the quantitative real-time PCR (qRT-PCR) experiments were purchased from Bio-Rad Laboratories (Hercules, CA, USA). All other reagents were from ThermoFisher Scientific (Waltham, MA, USA) unless specified otherwise.
Tg mice and treatments
APP/PS1 mice (Stock No. 004462) were obtained from the Jackson Laboratory (Bar Harbor, ME, USA). Wild-type (WT) mice were purchased from Liaoning Changsheng Biotechnology Co., Ltd. (Benxi, Liaoning, China). Neurons in the brains of APP/PS1 Tg mice doubly express a chimeric mouse/human amyloid precursor protein (Mo/HuAPP695swe) and a mutant human PS1 (PS1-dE9); both mutations are associated with early-onset AD. Tg mice show deposition of Aβ at approximately 6–7 months of age, and at 9 months of age APP/PS1 Tg mice show obvious impairment of learning ability compared with WT mice. Five mice per cage were housed in a controlled environment at standard room temperature and relative humidity with a 12-h light–dark cycle and free access to food and water. The general health and body weights of the animals were monitored daily. The brains of animals in the different groups were collected under anesthesia and subsequently fixed by perfusion, as previously described [50].
Aβ42 oligomer preparation
Aβo was generated as described in a previous study [51]. β-Amyloid42 was purchased from ChinaPeptides Co., Ltd. (Shanghai, China). Specifically, lyophilized Aβ42 was dissolved in 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP; Sigma-Aldrich, St. Louis, MO, USA) to 1 mM. The solution was split into aliquots, the HFIP was evaporated, and the peptide was stored at −80 °C. Twenty-four hours prior to use, the amyloid peptide was dissolved (100 µM) in dimethyl sulfoxide (DMSO; Sigma-Aldrich, St. Louis, MO, USA) and sonicated. The solution was then diluted to 20 µM in F12/DMEM (glutamate-free; Kibbutz Beit-Haemek, Israel) and incubated at 4 °C for 24 h to obtain the Aβ oligomer (Aβo). The quality of the oligomer product was verified by western blot using an antibody against the Aβ peptide (Sigma-Aldrich, St. Louis, MO, USA).
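The serial dilution described above follows the standard $C_1V_1 = C_2V_2$ relation. The sketch below is illustrative only: it takes the stock and working concentrations as 100 µM and 20 µM (units assumed), and the 500 µL final volume is a hypothetical choice, not part of the protocol.

```python
def stock_volume(c_stock, c_final, v_final):
    """Volume of stock needed to reach c_final at v_final, via C1*V1 = C2*V2.

    Units cancel as long as both concentrations share a unit (e.g. µM)
    and the returned volume inherits the unit of v_final (e.g. µL).
    """
    if c_final > c_stock:
        raise ValueError("cannot dilute to a higher concentration")
    return c_final * v_final / c_stock

# 100 µM Aβ42/DMSO stock diluted to 20 µM in F12/DMEM, for 500 µL:
v_stock = stock_volume(c_stock=100.0, c_final=20.0, v_final=500.0)  # 100.0 (µL)
v_medium = 500.0 - v_stock                                          # 400.0 (µL)
```

The guard against "diluting upward" catches the common mistake of swapping stock and target concentrations.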
Mouse BV2 cells were grown (37 °C and 5% CO2) on 6 cm tissue culture dishes (1 × 106 cells per dish) in appropriate medium. In a separate set of experiments, the cells were grown in serum-free medium for an additional 24 h before incubation with inhibitors in the absence or presence of Aβo [52].
Culture of primary microglia cells
The plates were incubated with 0.01% l-polylysine at 37 °C for 4 h. Remove the l-polylysine solution, rinse the culture dish with sterilized deionized water, and dry it for standby. The mice born within 24 h were disinfected with 75% (volume fraction) alcohol. The mice were decapitated under aseptic conditions. The brain tissue was taken out and placed in a cold plate of pH 7.2, d-hank's solution without calcium and magnesium (with ice bag under it). The cerebellum, hippocampus and cerebral medulla were removed under aseptic conditions, and then the cerebral cortex was obtained. The meninges and blood vessels were carefully stripped. The tissue was cut into 1 mm3 tissue pieces with iris scissors, and then digested with 0.125% trypsin and DNA enzyme at 37 °C for 20 min, and shaking for 2–3 times. Discard the supernatant, add complete inoculation solution to stop digestion, rinse twice. After resuspension with complete medium, let it stand for 2 min. Then the suspension was carefully collected in a new centrifuge tube, centrifuged (1000 rpm/min, 10 min, 4 °C), and the supernatant was discarded. Add the complete culture medium, and then resuspend and filter with 200 mesh stainless steel mesh. The cell filtrate was inoculated into the culture dish. After 24 h of incubation in 5% CO2 incubator at 37 °C, the medium was changed, and then the medium was changed every 3 days. The cells were cultured for 14–16 days. The culture medium was poured out and digested with 0.05% trypsin (2–3 ml). When the microglia attached to astrocytes were detached, the digestion medium containing floating microglia was transferred into a 10 ml centrifuge tube, and the digestion was immediately terminated with complete medium. 1000 rpm/min, centrifugation for 5 min, discard the supernatant, add complete culture, blow into cell suspension again, inoculate in the coated culture dish, and place in CO2 constant temperature cell incubator (37 °C). 
After 24 h, the culture medium was aspirated to remove non-adherent oligodendrocytes, and fresh complete culture medium was added to continue the culture.
Acquisition of fluorescence images
Cells grown on gelatin-coated coverslips were co-transfected with green fluorescent protein-OPTN (GFP-OPTN) and RFP-RIPK1, or with pCHAC-mt-mkeima. After 48 h, the cells were fixed with 4% paraformaldehyde for 10 min, permeabilized with 0.1% Triton X-100 for 10 min, washed three times with phosphate-buffered saline (PBS(−)), and stained with DAPI. Images were captured and processed at room temperature using a Leica confocal microscope system (Wetzlar, Germany) equipped with a 63×/1.4 numerical aperture oil differential interference contrast Plan-Apochromat objective.
Co-immunoprecipitation (CoIP)
Transfected HEK293T or infected BV2 cells were lysed in lysis buffer (50 mM Tris–hydrochloride, pH 7.4, 150 mM sodium chloride [NaCl], 1% Nonidet P-40, 0.25% sodium deoxycholate, and 1 mM ethylenediaminetetraacetic acid, with protease and phosphatase inhibitors) for 1 h. The lysates were then centrifuged to remove cell debris. Protein amounts were quantified using a bicinchoninic acid protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA). A fraction (1/10) of the cleared lysate was heated as whole-cell lysate for immunoblotting. Co-immunoprecipitation was performed by incubating the remaining lysate (9/10) with 1 mg of the indicated antibodies or control IgG overnight (16 h) at 4 °C. The immune complexes were captured using Dynabeads (Thermo Fisher Scientific, Waltham, MA, USA) for 3 h at 4 °C under gentle shaking. The immunoprecipitates were washed five times, eluted by the addition of sample buffer, boiled, and analyzed through SDS-PAGE. For ubiquitination assays, cells were initially lysed with radioimmunoprecipitation assay (RIPA) buffer containing 1% SDS. Subsequently, the cell extracts were diluted with RIPA buffer to 0.1% SDS. Finally, a fraction (1/10) of the diluted extracts was heated as whole-cell lysate for immunoblotting, and the remaining lysate (9/10) was subjected to immunoprecipitation.
Flow cytometry detection
WT or APP/PS1 Tg mice were euthanized using tribromoethanol and perfused with PBS, and the hippocampus and cerebral cortex were removed. The obtained tissues were washed twice in serum-free DMEM and then quickly cut into 1 mm³ pieces using a scalpel. The pieces were incubated at 37 °C in 5 ml DMEM containing 2.5 mg trypsin and 5 mg collagenase for 20 min. The digestion was stopped by adding DMEM containing 10% serum, and the cell suspension was strained through a 40 μm cell sieve. The filtrate was centrifuged, resuspended in 6 ml HBSS, and carefully layered on top of an Optiprep gradient solution. The tubes containing the cells and gradient solution were centrifuged in a Thermo Fisher centrifuge for 15 min at 1900 rpm. The lipid and debris layers were carefully aspirated, and the glial layer at the bottom was resuspended in 2.5 ml HBSS. After washing with HBSS, the cells were fixed in 4% paraformaldehyde for 1 h and then permeabilized in 1% Triton X-100 for 1 h. After blocking with goat serum for 1 h, OPTN and Iba1 antibodies were added and incubated at 4 °C for 4 h. After washing with PBS, the fluorescent secondary antibody was added and incubated at room temperature for 1 h. The samples were analyzed using a FACSCalibur (Becton–Dickinson) flow cytometer. For each sample, 5000 events were recorded, and FlowJo was used for data analysis.
Measurement of the IL-1β concentration
The IL-1β levels in the media of both the control cells and the Aβo-treated cells were determined using the IL-1β enzyme immunoassay kits obtained from R&D Systems (Minneapolis, MN, USA), following the manufacturer's instructions. The results were expressed as ng IL-1β per ml medium.
Quantitative real-time PCR (qRT-PCR)
Real-time PCR assays were performed on the MiniOpticon Real-Time PCR detection system (Bio-Rad) using total RNA and the GoTaq One-Step Real-Time PCR kit with SYBR green (Promega, Madison, WI, USA) [52]. Gene expression levels were normalized to those of glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The primers were as follows:
GAPDH (NM_001289726.1): F-AACTTTGGCATTGTGGAAGG, R-ACACATTGGGGGTAGGAACA;
AIM2 (NM_001013779): F-TGGAGGTCACCAGTTCCTCA, R-TTCCTCTGTTATCTTCTGGACTTT;
ASC (NM_023258): F-GTCGTATGGCTTGGAGCTCA, R-CCACAGCTCCAGACTCTTCT;
OPTN (NM_001356487): F-GGAGGCAGTAGACAGTCCCT, R-CACTTGGGGCAGGAGTGAAT;
RIPK1 (NM_001359997): F-GCCAGTAGCAGATGACCTCA, R-GCTTGGTGTCTGGAAGTCGA.
The ratio was calculated using the following equation:
$${\text{Ratio}} = \frac{2^{\Delta\text{Ct}_{\text{Gene}}}}{2^{\Delta\text{Ct}_{\text{GAPDH}}}}, \qquad \Delta\text{Ct} = \text{Ct}_{\text{Control}} - \text{Ct}_{\text{Experiment}}.$$
The untreated sample was always set to 1, and the value of the treatment group was obtained from the previous equation.
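As a numerical illustration of this ratio (a minimal sketch; the Ct values below are hypothetical and not data from the study):

```python
def expression_ratio(ct_gene_ctrl, ct_gene_exp, ct_gapdh_ctrl, ct_gapdh_exp):
    """Relative expression of a target gene in the experimental sample versus
    the control, normalized to GAPDH. Each PCR cycle roughly doubles the
    template, so a Ct difference corresponds to a power of 2."""
    gene_fold = 2 ** (ct_gene_ctrl - ct_gene_exp)      # target-gene fold change
    gapdh_fold = 2 ** (ct_gapdh_ctrl - ct_gapdh_exp)   # loading-control fold change
    return gene_fold / gapdh_fold

# Hypothetical Ct values: the target crosses threshold 2 cycles earlier in the
# treated sample while GAPDH is unchanged, i.e. 4-fold upregulation.
ratio = expression_ratio(25.0, 23.0, 20.0, 20.0)
print(ratio)  # 4.0
```

With identical Ct values in both samples the ratio is 1, matching the convention that the untreated sample is set to 1.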
Western blotting analysis
Tissues or cells were lysed in RIPA buffer (25 mM Tris–HCl [pH 7.6], 150 mM NaCl, 1% Nonidet P-40, 1% sodium deoxycholate, and 0.1% SDS), containing a protease inhibitor cocktail (Thermo Fisher Scientific, Waltham, MA, USA). The protein content of the cell lysates was determined using a bicinchoninic acid protein assay reagent (Thermo Fisher Scientific, Waltham, MA, USA). The total cell lysates (15 μg) were subjected to SDS-PAGE, transferred to a membrane, and incubated with a panel of specific antibodies. Each membrane was probed with only one antibody, and β-actin was used as a loading control. All western blotting experiments were performed at least in triplicate, with a different cell preparation each time.
Immunohistochemistry
Mouse brains were collected from 3-month-old APP/PS1-Control-AAV or APP/PS1-OPTN-AAV Tg mice and fixed with 4% paraformaldehyde. Serial sections (thickness: 10 μm) were cut using a cryostat (CM1850; Leica, Wetzlar, Germany). The slides were rehydrated in a graded series of ethanol and submerged in 3% hydrogen peroxide to eliminate endogenous peroxidase activity. The levels of Iba1 and GFAP were determined using an immunohistochemical staining kit, according to the instructions provided by the manufacturer (Invitrogen, Carlsbad, CA, USA) [53].
Preparation of lentiviral particles
Lentiviral vectors encoding the mouse OPTN, HA-OPTN, and Flag-RIPK1 genes, as well as a control lentiviral vector, were provided by Keygen Biotech. Co. (Nanjing, China). The lentiviral vectors were purified and co-transfected with packaging vectors (psPAX2 and pMD2.G) into HEK293T cells. After 48 h and 72 h, the lentiviral particles in the supernatant were concentrated through ultracentrifugation and resuspended in PBS (−). For knockdown of OPTN, lentiviral particles containing sh-OPTN or control shRNA were adjusted to titers of 10⁶–10⁷ prior to infection of BV2 cells.
Purification of adeno-associated virus (AAV)
Recombinant AAV-OPTN was generated via triple transfection of HEK293T cells with pAOV-OPTN, pAAV-RC9, and pHelper vectors using Lipofectamine 2000 (Thermo Fisher Scientific, Waltham, MA, USA). Viral particles were harvested from the media 72 h after transfection. Cell pellets were resuspended in 10 mM Tris with 2 mM magnesium dichloride (pH 8), freeze-thawed three times, and treated with 100 U/ml Benzonase at 37 °C for 1 h. After centrifugation at 13,000 × g for 10 min, the supernatants were collected. The combined media and supernatants were concentrated through precipitation with 10% polyethylene glycol 8000 (Sigma-Aldrich, St. Louis, MO, USA) and 500 mM NaCl. After centrifugation at 15,000 × g for 30 min, the precipitated virus was suspended in 10 mM Tris with 2 mM magnesium dichloride. The virus particles were purified using a gradient (15%, 25%, 40%, and 60%) of iodixanol (Sigma-Aldrich, St. Louis, MO, USA). Viral titers were determined through qRT-PCR and expressed as DNase-resistant particles/ml.
Intracerebroventricular (i.c.v) injection of AAV-OPTN
Particles of AAV-OPTN viruses were injected intracerebroventricularly (i.c.v.) into APP/PS1 mice. In brief, stereotaxic injections were performed at the following coordinates from the bregma: mediolateral, 2.10 mm; anteroposterior, 2.00 mm; and dorsoventral, 2.28 mm. Following injection, each mouse was allowed to recover spontaneously on a heated pad. The reliability of the injection sites was validated by injecting trypan blue dye (Invitrogen, Carlsbad, CA, USA) in separate cohorts of mice and observing the staining in the cerebral ventricles. At 25 days post-injection, the mice were euthanized under anesthesia and perfused with PBS (−) [54].
Plasmid transfection
Plasmids encoding HA-tagged OPTN and its fragments, as well as Flag-tagged RIPK1 and its fragments, were cloned into the pLVX-IRES-ZsGreen vector for transient expression in HEK293T cells using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA). In control experiments, pLVX-IRES-ZsGreen plasmids were transfected into BV2 cells via similar methods. The transfected cells were allowed to recover for ≥ 12 h in growth medium and subsequently incubated overnight in serum-free medium prior to treatment with Aβo before extraction.
Infection with lentiviral particles
BV2 cells were seeded in 24-well plates at a density of 2 × 10⁵ cells/well. Lentiviral particles and 8 μg/ml polybrene (Sigma-Aldrich, St. Louis, MO, USA) were added to the culture, which was centrifuged for 90 min at 1500 rpm. The supernatant was removed immediately after infection and replaced with basal medium (Invitrogen, Carlsbad, CA, USA) containing 10% fetal bovine serum and 50% conditioned medium. The efficiency of infection was determined via qRT-PCR and western blotting after 72 h.
Ethics statement
All animals were managed according to the Care and Use of Medical Laboratory Animals (Ministry of Health, Beijing, China), and all experimental protocols were approved by the Laboratory Ethics Committee of Northeastern University of China.
Statistical analysis
All data are presented as the mean ± standard error of at least three independent experiments. The statistical significance of differences between means was determined using Student's t test or one-way analysis of variance (ANOVA), as appropriate. In cases of significantly different means, multiple pairwise comparisons were performed using Tukey's post hoc test.
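The group-comparison step can be sketched as follows. This is a minimal, pure-Python illustration of the one-way ANOVA F statistic (the toy measurements are hypothetical; in practice a significant F would be followed by Tukey's post hoc comparisons):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square divided by
    the within-group mean square."""
    all_values = [x for g in groups for x in g]
    n_total, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n_total
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, weighted by group size.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Within-group sum of squares (residual variation).
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical triplicate measurements from two treatment groups.
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
print(f_stat)  # 1.5
```

The F statistic is then compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain a P value.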
Expression of OPTN is downregulated in the brains of AD patients and APP/PS1 Tg mice
With increasing age, cells gradually become senescent and various cellular functions are disrupted; for example, when the degradative capacity of the proteasome is overloaded, the autophagy system is activated to remove accumulated misfolded proteins, intracellular aggregates and irreparably damaged organelles [55]. In addition, many proteins that are mutated in neurodegenerative diseases are related to autophagy or lysosomal function [56, 57]. Given these observations, we searched the GEO database to integrate transcriptome sequencing data of AD patients. All of the collected data were divided into four groups: the entorhinal cortex (GSE26927, GSE48350, GSE5281, GSE26972), frontal cortex (GSE12685, GSE15222, GSE33000, GSE36980, GSE48350, GSE5281), hippocampus (GSE28146, GSE29378, GSE36980, GSE48350, GSE5281), and temporal cortex (GSE15222, GSE36980, GSE5281, GSE37263). The autophagy receptor gene OPTN was identified as significantly decreased in AD patients compared to healthy controls (Fig. 1A–D). To further validate these results, we measured the expression of OPTN in 9-month-old APP/PS1 Tg mice. The results demonstrated that the mRNA and protein expression of OPTN was downregulated in the cerebral cortex and hippocampus of 9-month-old APP/PS1 Tg mice compared to that of WT mice (Fig. 1E–G), consistent with the transcriptomic sequencing data from AD patients (Fig. 1A–D).
OPTN expression is downregulated in AD patients and APP/PS1 transgenic mice. A–D Transcriptome data of the entorhinal cortex, hippocampus, frontal cortex, and temporal cortex in AD patients were analyzed after normalization. E–H Nine-month-old APP/PS1 transgenic mice were anesthetized and euthanized to obtain the cerebral cortex and hippocampus. E Expression of OPTN in the cerebral cortex and hippocampus of APP/PS1 transgenic mice was detected by qRT-PCR using GAPDH as an internal control. F The protein level of OPTN in the cerebral cortex and hippocampus of APP/PS1 transgenic mice was assessed by western blotting using β-actin as an internal control. G ImageJ software was used to semiquantitatively analyze the fold change of OPTN relative to β-actin. H WT or APP/PS1 Tg mice were double-stained for Iba1 (green) and OPTN (red). I Expression of OPTN in microglia in the cortex and hippocampus of APP/PS1 transgenic mice at 9 months of age was detected by flow cytometry. The data represent the means ± S.E. of independent experiments. APP/PS1 transgenic mice were compared with WT mice *P < 0.05, **P < 0.01
Because OPTN expression is downregulated in the brains of AD patients and APP/PS1 Tg mice, we further determined its origin. For this purpose, a double-immunofluorescence labeling technique was used to determine the localization of OPTN in the brains of 9-month-old APP/PS1 Tg mice. By immunostaining the brains with OPTN together with either NeuN, Iba1 or GFAP, we found that OPTN colocalized with Iba1 (Fig. 1H), and its expression in microglial cells was downregulated, as shown by flow cytometry (Fig. 1I). This result suggests that OPTN is primarily expressed in microglial cells but not neurons or astrocytes in the brains of 9-month-old APP/PS1 Tg mice.
OPTN is critical for mitophagy
Because OPTN is an essential receptor for mitophagy [58], we initially investigated its role in autophagy. By overexpressing the mitochondrial keima protein in BV2 cells, we found that Aβo treatment markedly blocked the fusion between mitochondria and lysosomes, as evidenced by attenuation of the red fluorescent signal excited at 543 nm, which is enhanced in the acidic environment of the lysosomal lumen (Fig. 2A). When we overexpressed OPTN in Aβo-treated BV2 cells, the red fluorescence was restored, indicating recovery of autophagic function (Fig. 2A). To further validate these observations, we measured the expression of heat shock protein 60 (HSP60). Western blotting revealed that OPTN overexpression blocked the Aβo-induced increase in HSP60 protein in BV2 cells (Fig. 2B). Based on these observations, OPTN is essential for mitophagy.
OPTN alleviates Aβo-induced dysfunction of mitochondrial autophagy. A, B BV2 cells were treated with Aβo in the absence or presence of transfection with plvx-IRES-OPTN. A mKeima fluorescence was evoked using two excitation filters (438 ± 12 nm and 550 ± 15 nm) and a 610LP emission filter. B Expression levels of HSP60 and OPTN were detected by western blotting. β-actin served as an internal control. The data represent the means ± S.E. of independent experiments. Vehicle-treated BV2 cells and Aβo-treated cells overexpressing OPTN were compared with cells treated with Aβo alone; *P < 0.05, **P < 0.01, ***P < 0.001
AIM2 inflammasomes are activated during the progression of AD
When mitochondrial autophagy is disrupted, impaired mitochondria that escape clearance release reactive oxygen species (ROS), free radicals and mitochondrial DNA (mtDNA) into the cytoplasm, potentially contributing to inflammation [59]. Indeed, AIM2 has been reported to bind free cytoplasmic DNA through its HIN200 domain, resulting in the oligomerization of ASC, the formation of caspase-1-dependent inflammasomes, and the maturation and secretion of proinflammatory cytokines such as IL-1β and IL-18 [60]. For these reasons, we assessed the activity of AIM2 inflammasomes during the course of AD development and progression. By analyzing the GEO database, expression of AIM2 and ASC was found to be upregulated in AD patients compared to healthy controls (Fig. 3A–D). To confirm these results, we further determined the mRNA and protein expression of AIM2 inflammasome components, including AIM2, ASC, caspase-1 and IL-1β, in the cerebral cortex and hippocampus of 9-month-old APP/PS1 Tg mice. qRT-PCR and western blotting demonstrated that the mRNA and protein expression of AIM2 and ASC was upregulated in the cerebral cortex and hippocampus of APP/PS1 Tg mice compared to WT controls (Fig. 3E–H). The cleaved active form of caspase-1 was produced from pro-caspase-1 in APP/PS1 Tg mice (Fig. 3E, G). Moreover, the protein abundance of mature IL-1β was enhanced in APP/PS1 Tg mice (Fig. 3E, G). These findings indicate that AIM2 inflammasomes are activated in AD brains.
The AIM2 inflammasome is activated in AD patients and APP/PS1 transgenic mice. A–D Brain transcriptome data from patients with AD and controls were collected from the GEO database and normalized for analysis. E–H APP/PS1 transgenic mice at the age of 9 months were anesthetized and euthanized to obtain the cerebral cortex and hippocampus. E Detection of the expression levels of AIM2, ASC, pro-caspase-1, caspase-1 and IL-1β in the cerebral cortex by western blotting. β-actin served as the internal control. In the right panel, ImageJ software was used to semiquantitatively analyze the western blotting results. F qRT-PCR was used to detect the mRNA expression of AIM2 and ASC in the cerebral cortex. GAPDH served as internal control. G The expression of AIM2, ASC, pro-caspase-1, caspase-1 and IL-1β in the hippocampus was detected by western blotting. β-actin served as the internal control. H The mRNA expression of AIM2 and ASC in the hippocampus was detected by qRT-PCR. GAPDH was used as the internal control. The data represent the means ± S.E. of independent experiments. APP/PS1 transgenic mice were compared with WT mice; *P < 0.05, **P < 0.01, ***P < 0.001
Aβo activates AIM2 inflammasomes
Since the AIM2 inflammasome was activated in the brains of AD patients and APP/PS1 Tg mice, we next explored the effects of Aβ on inflammasome activation. In BV2 cells, Aβo treatment clearly induced the mRNA and protein expression of both AIM2 and ASC (Fig. 4A, C). Similarly, the active form of caspase-1 and mature IL-1β were produced in Aβo-treated BV2 cells (Fig. 4A, B, D). To further validate these observations, we performed similar experiments in primary cultured microglial cells treated with Aβo and obtained similar results (Fig. 4E–H). These results clearly indicate that Aβo activates AIM2 inflammasomes during the course of AD development and progression.
Aβo activates the AIM2 inflammasome. A–D BV2 cells were treated with Aβo for 12 h. A Expression levels of AIM2, ASC, pro-caspase-1 and caspase-1 were detected by western blotting. β-actin served as an internal control. B ImageJ software was used for semiquantitative analysis of western blots. C qRT-PCR was used to detect the mRNA expression of AIM2 and ASC with GAPDH as an internal control. D Secretion of IL-1β was evaluated by ELISA. E–H Primary microglia were treated with Aβo for 12 h. E Protein levels of AIM2, ASC, pro-caspase-1, and caspase-1 were detected by western blotting with β-actin as the internal control. F ImageJ software was used to semiquantitatively analyze the fold change in AIM2, ASC, pro-caspase-1 and caspase-1 relative to β-actin. G mRNA expression of AIM2 and ASC was detected by qRT-PCR with GAPDH as an internal control. H Secretion of IL-1β was detected by ELISA. The data represent the means ± S.E. of independent experiments. Aβo-treated cells were compared with vehicle-treated cells; *P < 0.05, **P < 0.01, ***P < 0.001
OPTN blocks the effects of Aβo on activating AIM2 inflammasomes
Given the potential roles of OPTN in neuroinflammation [61], we further investigated its roles in Aβo-activated AIM2 inflammasomes. By knocking down the expression of OPTN in BV2 cells, Aβo exhibited enhanced ability to increase the active cleavage product of caspase-1 and mature IL-1β in supernatants (Fig. 5A–C). In whole-cell lysates, Aβo induced early activation of AIM2 and ASC and decreased the protein abundance of pro-caspase-1 in OPTN knockdown BV2 cells (Fig. 5A, D–G). These results revealed that OPTN deficiency facilitated the activation of AIM2 inflammasomes in Aβo-stimulated microglial cells.
OPTN alleviates Aβo-induced AIM2 inflammasome activation. A–G OPTN was silenced in BV2 cells, which were then treated with Aβo for 4, 8, and 12 h. A Western blotting was used to detect the protein expression of AIM2, ASC, OPTN and pro-caspase-1 in whole-cell lysates. Meanwhile, extracellular secretion of caspase-1 and IL-1β was assessed in the conditioned medium. B–G ImageJ software was used to semiquantitatively analyze the fold changes in caspase-1, IL-1β, AIM2, ASC, OPTN and pro-caspase-1 relative to β-actin. The data are presented as the means ± S.E. of independent experiments. OPTN-knockdown BV2 cells were compared with WT BV2 cells, *P < 0.05, **P < 0.01, ***P < 0.001. H–N BV2 cells expressing increasing levels of OPTN were cultured in the absence or presence of Aβo for 12 h. H Western blotting was used to detect the protein expression of AIM2, ASC, OPTN and pro-caspase-1 in whole-cell lysates. Meanwhile, extracellular secretion of caspase-1 and IL-1β was detected in the cell culture medium. I–N ImageJ software was used to semiquantitatively analyze the fold change in caspase-1, IL-1β, AIM2, ASC, OPTN and pro-caspase-1 relative to β-actin. The data are presented as the means ± S.E. of independent experiments. Aβo-treated BV2 cells were compared with vehicle-treated BV2 cells, *P < 0.05, **P < 0.01, ***P < 0.001
Reciprocally, we ectopically expressed OPTN in BV2 cells. In response to increasing OPTN protein levels, the active form of caspase-1 and the production of mature IL-1β were decreased in OPTN-overexpressing BV2 cells (Fig. 5H–J). In whole-cell lysates, the protein abundance of AIM2 and ASC was downregulated, and pro-caspase-1 accumulated owing to the lack of cleavage into the active form of caspase-1 in OPTN-overexpressing cells (Fig. 5H, K–N). Although Aβo can activate AIM2 inflammasomes, it was unable to diminish the effects of OPTN overexpression on deactivating AIM2 inflammasomes (Fig. 5H–L). Based on these observations, OPTN alleviates the effects of Aβo on activating AIM2 inflammasomes in microglial cells.
As a downstream target of TLRs, RIPK1 expression is elevated in the brains of AD patients
As discussed above, TLRs mediate the effects of ligands or endogenous stimulators, such as Myd88 and TRIF, on activating NF-κB or AP-1, which results in the activation of inflammasomes [41]. Specifically, TLR2, 4 and 6 mediate the effects of Aβ1–42 fibrils on promoting the maturation and secretion of tumor necrosis factor α (TNF-α) and IL-1β in microglial cells [23, 62]. Moreover, LPS, the natural ligand of TLR4, has been proposed to activate inflammasomes via the Toll-like receptor adaptor molecule 1 (TICAM1/TRIF)-, receptor interacting serine/threonine kinase 1 (RIPK1)-, FAS-associated death domain protein (FADD)- and caspase-8-axes [43]. Based on these clues, we next explored the activity of TLRs in AD. GSEA of transcriptome sequencing data from AD patients in the GEO database demonstrated that TLR signaling pathways were markedly upregulated in the entorhinal cortex, frontal cortex and hippocampus (Fig. 6A–D). As a downstream target of TLRs, RIPK1 in this pathway plays critical roles in driving inflammation, apoptosis and necrosis [49, 63]. In addition, RIPK1 activation has been reported to be associated with neurodegenerative diseases via inflammation-activating mechanisms [49]. Therefore, it is necessary to determine the activity of RIPK1 during the course of AD development and progression. As expected, we found that expression of RIPK1 was significantly upregulated in the entorhinal and temporal cortex of AD patients compared to healthy controls (Fig. 6E, G). Even though it was not statistically significant, the average expression of RIPK1 was higher in the hippocampus and frontal cortex in AD patients than in healthy subjects (Fig. 6F, H). To further validate these observations, we determined the expression of RIPK1 in the brains of APP/PS1 Tg mice. The results demonstrated that mRNA and protein expression of RIPK1 was elevated in the cerebral cortex and hippocampus of APP/PS1 Tg mice compared to WT mice (Fig. 6I–K). 
These observations indicate that the signaling cascades of TLRs and RIPK1 are activated during the course of AD development and progression.
The Toll-like receptor pathway is upregulated in AD patients, and RIPK1 expression is increased in APP/PS1 transgenic mice. A–D Transcriptome sequencing data of the entorhinal cortex, hippocampal cortex, temporal cortex and frontal cortex tissues from AD patients were collected from the GEO database, and GSEA was performed after normalization. E–H The differences in RIPK1 expression in the transcriptome of the entorhinal cortex, hippocampus, frontal cortex and temporal cortex were normalized in AD patients. I–K Nine-month-old APP/PS1 Tg mice were anesthetized and euthanized to obtain the cerebral cortex and hippocampus. I mRNA expression of RIPK1 in the cerebral cortex and hippocampus was detected by qPCR using GAPDH as the internal control. J Protein levels of RIPK1 in the cerebral cortex and hippocampus were elucidated by western blotting with β-actin as the internal control. K ImageJ software was used to semiquantitatively analyze the fold change in RIPK1 relative to β-actin. The data represent the means ± S.E. of independent experiments. APP/PS1 Tg mice were compared with WT mice, ***P < 0.001
OPTN deactivates the neuroinflammatory pathways of RIPK1
Since OPTN has been shown to depress the activity of AIM2 inflammasomes, we further elucidated its role in regulating RIPK1-mediated inflammatory pathways. Knocking down the expression of OPTN induced the accumulation of RIPK1 protein in the cytoplasm of BV2 cells (Fig. 7A–C). The resulting cytoplasmic overload of RIPK1 triggered the translocation of NF-κB from the cytoplasm to the nucleus by enhancing the phosphorylation of IκBα in BV2 cells (Fig. 7A, D–G). Interestingly, the mRNA expression of RIPK1 was not changed by knocking down the expression of OPTN in BV2 cells (Fig. 7H–I). Based on these observations, we further determined the roles of OPTN in the production of IL-1β in BV2 cells. Using ELISA, we determined that shRNA-mediated knockdown of OPTN elevated the production of IL-1β in BV2 cells (Fig. 7J). Consistently, knocking down the expression of OPTN promoted the transcriptional activity of NF-κB in BV2 cells (Fig. 7K). We further treated OPTN-knockdown BV2 cells with Aβo for 12 h. Compared with controls, Aβo treatment markedly induced the protein accumulation of RIPK1 while slightly depressing the protein levels of OPTN in BV2 cells (Fig. 7A–C). As a consequence, NF-κB translocated from the cytoplasm to the nucleus upon enhanced phosphorylation of IκBα, which activated the transcriptional activity of NF-κB and led to the synthesis of IL-1β in BV2 cells (Fig. 7A, D–G, J, K). More interestingly, the mRNA expression of RIPK1 was elevated in response to treatment with Aβo in BV2 cells (Fig. 7I).
OPTN negatively regulates RIPK1 inflammatory signaling pathways. A BV2 cells with OPTN silenced were treated with Aβo for 12 h. Then, OPTN, RIPK1, p-IκBα, and IκBα in the cytoplasm and NF-κB in the cytoplasm or nucleus were detected by western blot with β-actin as an internal control. B–G ImageJ software was used to semiquantitatively analyze the optical density of western blots. H–K OPTN-silenced BV2 cells were treated with Aβo for 12 h. H OPTN mRNA expression was detected by qRT-PCR using GAPDH as an internal control. I RIPK1 mRNA expression was detected by qRT-PCR using GAPDH as an internal control. J Extracellular secretion of IL-1β was assessed by ELISA. K The binding activity of NF-κB was evaluated by dual-luciferase assay. The data are presented as the means ± S.E. of independent experiments. OPTN-silenced BV2 cells were compared with control BV2 cells, or Aβo-treated BV2 cells with vehicle-treated BV2 cells, *P < 0.05, **P < 0.01, ***P < 0.001. L–V BV2 cells with ectopic overexpression of OPTN were cultured in the absence or presence of Aβo treatment for 12 h. L Protein levels of OPTN, RIPK1, p-IκBα, and IκBα in the cytoplasm and NF-κB in the cytoplasm or nucleus were detected by western blot using β-actin as an internal control. N–R ImageJ software was used to semiquantitatively analyze the western blot results. S mRNA expression of OPTN was detected by qRT-PCR using GAPDH as an internal control. T mRNA expression of RIPK1 was detected by qRT-PCR using GAPDH as an internal control. U Extracellular secretion of IL-1β was assessed by ELISA. V The binding activity of NF-κB was evaluated using a dual-luciferase assay. The data represent the means ± S.E. of independent experiments. OPTN-overexpressing BV2 cells were compared with control BV2 cells, or Aβo-treated BV2 cells with vehicle-treated BV2 cells, *P < 0.05, **P < 0.01, ***P < 0.001
To further validate the above observations, we overexpressed OPTN in BV2 cells. Ectopic overexpression of OPTN markedly decreased the protein levels of RIPK1 without affecting its mRNA levels in BV2 cells (Fig. 7L–N, S, T), leading to reduced translocation of NF-κB from the cytoplasm to the nucleus through dephosphorylation of IκBα (Fig. 7L, O–R). By depressing the transcriptional activity of NF-κB, the production of IL-1β was decreased in OPTN-overexpressing BV2 cells (Fig. 7U, V). We further treated OPTN-overexpressing BV2 cells with Aβo for 12 h. OPTN overexpression blocked the effects of Aβo on inducing the protein accumulation of RIPK1 in BV2 cells (Fig. 7L–N). Similarly, phosphorylation of IκBα was also prevented by OPTN overexpression, which impaired NF-κB translocation from the cytoplasm to the nucleus in Aβo-treated BV2 cells (Fig. 7L, O–R). Consistent with these observations, OPTN overexpression partially suppressed the ability of Aβo to induce the synthesis of IL-1β and the transcriptional activity of NF-κB in BV2 cells (Fig. 7U, V); notably, however, it did not completely block these effects (Fig. 7U, V). Based on these observations, the NF-κB p65 subunit might not be the only factor controlling the transcriptional activity of NF-κB or the secretion of IL-1β in BV2 cells.
OPTN promotes proteasomal degradation of RIPK1 through ubiquitination
As discussed above, genetic interventions of OPTN negatively regulate the protein accumulation of RIPK1 without affecting the mRNA expression of RIPK1 in BV2 cells (Fig. 7A, C, I, L, N, T). These results indicate that OPTN does not regulate RIPK1 at the transcriptional level. We therefore speculated that a protein-degrading process, such as autophagy or the ubiquitin proteasome pathway, might be modulated by OPTN intervention. To test this, we treated OPTN-overexpressing BV2 cells with either bafilomycin A1 (BafiA1), to block the lysosomal pathway of autophagy, or MG132, to inhibit the proteasome pathway. RIPK1 protein levels remained significantly decreased in OPTN-overexpressing BV2 cells treated with BafiA1 (Fig. 8A, B). In contrast, MG132 treatment blocked the effect of OPTN on reducing RIPK1 protein levels in BV2 cells (Fig. 8A, B). Therefore, the ubiquitin proteasome pathway appears to mediate the effects of OPTN on RIPK1 degradation in microglial cells.
OPTN degrades RIPK1 through the ubiquitin proteasome pathway. A BV2 cells overexpressing OPTN were treated with bafilomycin A1 (200 nM) or MG132 (10 μM) for 6 h. The expression of RIPK1 and OPTN was detected by western blotting using β-actin as an internal control. B ImageJ software was used to semiquantitatively analyze the western blot results. C Flag-RIPK1 was ectopically expressed in OPTN-silenced BV2 cells, and lysates were immunoprecipitated using a mouse anti-Flag antibody. The precipitated protein was probed with a rabbit anti-ubiquitin antibody. The Flag antibody was used to detect RIPK1, and an anti-OPTN antibody was used to detect OPTN levels in the total protein. In the right panel, ImageJ software was used to semiquantitatively analyze ubiquitin levels. D OPTN was ectopically overexpressed in Flag-RIPK1-overexpressing BV2 cells, and lysates were immunoprecipitated using a mouse anti-Flag antibody. The precipitated protein was probed with a rabbit anti-ubiquitin antibody. The Flag antibody was used to detect RIPK1, and an anti-OPTN antibody was used to detect OPTN levels in whole-cell lysates. In the right panel, ImageJ software was used to semiquantitatively analyze ubiquitin levels. The data represent the means ± S.E. of independent experiments. OPTN-silenced or OPTN-overexpressing BV2 cells were compared with the control group, *P < 0.05, **P < 0.01
To confirm the above findings, BV2 cells were transfected with Flag-RIPK1 with or without OPTN knockdown by shRNAs. The amount of ubiquitin bound to RIPK1 was assessed by immunoprecipitation with an anti-Flag antibody followed by western blotting. The results demonstrated that knocking down OPTN markedly disrupted the binding between RIPK1 and ubiquitin in BV2 cells (Fig. 8C). Reciprocally, binding between RIPK1 and ubiquitin was strengthened by overexpressing OPTN in BV2 cells (Fig. 8D). Based on these observations, OPTN facilitates the ubiquitination of RIPK1 in microglial cells.
The UBAN domain of OPTN and the death domain of RIPK1 mediate their interaction
Since optineurin (OPTN) is a ubiquitin-binding receptor protein [12, 64], we speculated that OPTN might facilitate the ubiquitination of RIPK1 through a direct interaction. Using computational biology, we first predicted the structures of OPTN and RIPK1. The C-score of RIPK1 was −2.41, and the TM score was 0.43 ± 4.0 Å. For OPTN, the C-score was −1.35, and the TM score was 0.55 ± 0.15 Å (Additional file 1: Fig. S1A). Based on these structures, molecular docking with HADDOCK [65] was used to predict the probability of binding between OPTN and RIPK1. The predicted binding affinity ΔG was −14.3 kcal/mol, and the dissociation constant Kd at 25.0 °C was 3.1 × 10−11 M (Additional file 1: Fig. S1B). An interaction is generally considered stable if Kd is below 1 × 10−9 M; the computational results therefore suggest that OPTN interacts with RIPK1. Analysis of the docked interface residues indicated that OPTN binds RIPK1 through their N- and C-termini (Additional file 1: Fig. S1B).
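As a quick consistency check, the reported ΔG and Kd are related by ΔG = RT ln Kd. A minimal Python sketch (using the textbook gas constant, not a value from the paper) reproduces the reported order of magnitude:

```python
import math

# Consistency check for the docking thermodynamics reported above:
#   dG = RT * ln(Kd)  =>  Kd = exp(dG / (R * T))
R = 1.9872e-3   # gas constant, kcal/(mol*K)
T = 298.15      # 25.0 degrees C in kelvin
dG = -14.3      # HADDOCK-predicted binding free energy, kcal/mol

kd = math.exp(dG / (R * T))
print(f"Kd = {kd:.1e} M")  # ~3.3e-11 M, same order as the reported 3.1e-11 M

# Stability criterion quoted in the text: Kd < 1e-9 M
assert kd < 1e-9
```

The small difference from the reported 3.1 × 10−11 M is within rounding of ΔG to three significant figures.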
To test this prediction, we overexpressed GFP-OPTN and mCherry-RIPK1 in BV2 cells. Using confocal microscopy, we found that OPTN colocalized with RIPK1 in BV2 cells (Fig. 9A). To further examine the interaction, we immunoprecipitated OPTN with a specific antibody from BV2 lysates and found by western blot that RIPK1 co-immunoprecipitated with OPTN, suggesting binding between OPTN and RIPK1 in microglial cells (Fig. 9B). In HEK293T cells cotransfected with HA-OPTN and Flag-RIPK1, immunoprecipitation with an anti-Flag antibody showed that HA-OPTN co-immunoprecipitated with RIPK1, suggesting binding between OPTN and RIPK1 in HEK293T cells as well (Fig. 9C). Together, these immunoprecipitation results confirm the interaction between OPTN and RIPK1.
The UBAN domain of OPTN and the death domain of RIPK1 mediate their interaction. A After 48 h of simultaneous overexpression of GFP-OPTN and mCherry-RIPK1 in BV2 cells, cells on slides were fixed in PFA and stained with DAPI, followed by imaging using confocal microscopy. B Protein was precipitated using an OPTN antibody in BV2 cells and then detected using an RIPK1 antibody. C In HEK293T cells, HA-OPTN and Flag-RIPK1 were co-overexpressed, then precipitated using an anti-Flag antibody and detected using an HA antibody. D Structural diagram of the OPTN protein. E Schematic diagram of the RIPK1 protein structure. F Full-length HA-OPTN, NEMO-truncated OPTN or UBAN-truncated OPTN was ectopically expressed in Flag-RIPK1-transfected HEK293T cells. The proteins were then immunoprecipitated using an anti-Flag antibody, followed by detection with an anti-HA antibody. G Full-length RIPK1, RIPK1 with a truncated protein kinase-like domain or RIPK1 with a truncated death domain was ectopically expressed in HA-OPTN-transfected HEK293T cells. The proteins were then immunoprecipitated using an anti-HA antibody, followed by detection with an anti-Flag antibody
To determine the precise interacting domains, we initially analyzed the functional microdomains of OPTN and RIPK1. The results demonstrated that OPTN contains NEMO and UBAN domains, and RIPK1 contains protein kinase-like and death domains (Fig. 9D, E). According to this analysis, we established truncation fragments of HA-OPTN and Flag-RIPK1, which deleted the N- or C-terminal functional domains and were then cotransfected into HEK293T cells (Fig. 9F, G). By co-immunoprecipitation with either anti-Flag or anti-HA antibodies, we found that deletion of C-terminal functional domains disrupted the binding between OPTN and RIPK1 in HEK293T cells (Fig. 9F, G). Therefore, these results revealed that the UBAN domain of OPTN and the death domain of RIPK1 mediate their interaction in microglial cells.
Restoration of OPTN decreases neuroinflammation by deactivating AIM2 inflammasomes and inducing RIPK1-degrading pathways in glial cells in APP/PS1 Tg mice
Because the expression of OPTN is downregulated in APP/PS1 Tg mice, we aimed to determine the effects of OPTN restoration on neuroinflammation. After injecting AAV-OPTN into the hippocampus of APP/PS1 Tg mice, we found that protein levels of AIM2, ASC and the active forms of caspase-1, as well as the production of IL-1β, were all repressed (Fig. 10A, B and Additional file 1: Fig. S2A–E). Similarly, OPTN overexpression decreased NF-κB translocation from the cytoplasm to the nucleus by reducing phosphorylation of IκBα in a RIPK1-dependent manner in APP/PS1 Tg mice (Fig. 10C, D and Additional file 1: Fig. S2F–L). Iba1 immunohistochemistry demonstrated that OPTN overexpression deactivates microglial cells, as reflected by increased branch endpoints and process length (Fig. 11A–C). Additionally, astrocyte activity was inhibited in response to ectopically expressed OPTN in APP/PS1 Tg mice (Fig. 11D, E). Based on these observations, OPTN is critical for suppressing neuroinflammation via the AIM2 inflammasome and RIPK1-dependent NF-κB pathways.
Overexpression of OPTN in the brains of APP/PS1 transgenic mice alleviates activation of the AIM2 inflammasome and RIPK1 pathways. A–D Three-month-old APP/PS1 transgenic mice were injected with OPTN or control adeno-associated virus in the hippocampus and cortex for 1 month. Brain tissue was collected after euthanasia under anesthesia. A, B Western blotting was used to detect the protein expression of OPTN, AIM2, ASC and caspase-1 in the cerebral cortex and hippocampus. β-actin served as the internal control. C, D Western blot analysis was used to determine the levels of RIPK1, GFAP, p-IκBα, IκBα, IL-1β and NF-κB in the cytoplasm and nucleus. Histone and β-actin were used as internal controls for the nucleus and cytoplasm, respectively. Data are presented as means ± S.E.M. of independent experiments. OPTN-AAV-injected APP/PS1 Tg mice compared with control-AAV-injected APP/PS1 Tg mice, *P < 0.05, **P < 0.01, ***P < 0.001
Overexpression of OPTN in the brains of APP/PS1 transgenic mice alleviates microglial activation and reduces the number of astrocytes. A–E OPTN was ectopically overexpressed in the brains of 3-month-old APP/PS1 transgenic mice. A Iba1 immunohistochemical staining was performed in brain tissues of the OPTN-AAV-injected and control groups. B, C The activity of microglial cells was analyzed in hippocampal and cerebral cortex tissues. Data are presented as means ± S.E.M. of independent experiments. D GFAP immunohistochemical staining was performed in the brains of the OPTN-overexpressing and control groups. E Statistical analysis of astrocyte numbers. OPTN-AAV-injected APP/PS1 Tg mice compared with control-AAV-injected APP/PS1 Tg mice, *P < 0.05, **P < 0.01
AD is a progressive neurodegenerative disease that causes dementia. Its primary symptoms are progressive loss of cognitive function and memory. The pathological mechanism of the disease is generally attributed to deposition of Aβ and hyperphosphorylation of tau [66]. Aβ deposition elicits an innate immunopathological response in AD [67]. Indeed, the immune response is primarily focused on deposited Aβ and neurofibrillary tangles [67, 68]. For example, APs are often closely associated with activated microglial cells and surrounded by activated astrocytes. Additional evidence suggests that cytokines, including IL-1β and IL-18, may contribute to the pathogenesis of AD [69]. Furthermore, studies have demonstrated that Aβ can activate the inflammasome, resulting in the secretion of IL-1β [21]. Treating AD model mice with inflammasome inhibitors significantly attenuated memory decline and reduced the deposition of APs [70]. In a subsequent study, APP/PS1/NLRP3−/− mice exhibited reduced caspase-1 cleavage and Aβ deposition and enhanced phagocytosis of Aβ compared with APP/PS1 mice, providing evidence that NLRP3 plays an exacerbating role in AD pathogenesis in vivo [45].
Based on these previous studies, we explored the activity of the AIM2 inflammasome in AD model mice and found that it was significantly activated, which was also confirmed in Aβ-treated microglial cells (Fig. 4). The AIM2 inflammasome was initially found to recognize pathogen-derived [71] and host [72] double-stranded DNA, damaging targeted tissues by inducing inflammation. It later became clear that AIM2 inflammasome activation is not restricted to innate immune responses. In patients with type 2 diabetes, activation of the AIM2 inflammasome induces chronic inflammation [73]. In the brain, the AIM2 inflammasome contributes to neuroinflammation independently of NLRP3 [74], resulting in neuronal cell death [75]. Moreover, depletion of AIM2 in an AD mouse model mitigated the deposition of Aβ and microglial activation [76]. Our data revealed that the AIM2 inflammasome is activated in APP/PS1 Tg mice in response to the accumulation of Aβ in APs.
Similar to the NLRP3 inflammasome, the AIM2 inflammasome is a multiprotein platform that, together with ASC and caspase-1, induces the maturation of cytokines such as IL-1β [40]. Consistently, we found that production of the mature, active forms of caspase-1 and IL-1β was elevated in an Aβ-dependent manner (Fig. 4). In agreement with our observation, inflammatory factors, including caspase-1 and IL-1β, were upregulated following activation of the AIM2 inflammasome in patients infected with hepatitis B virus [71]. Therefore, the AIM2 inflammasome is a potentially essential mediator of neuroinflammation and a therapeutic target in AD.
With respect to activating AIM2 inflammasomes, OPTN deficiency was identified as critical for this process in the brains of APP/PS1 Tg mice (Fig. 5). A gene associated with normal-tension glaucoma (NTG), OPTN was recently identified in NFTs and dystrophic neurites in AD patients, suggesting a new role for OPTN in the disease [77]. Beyond AD, OPTN has also been detected in the spinal cords of patients with amyotrophic lateral sclerosis (ALS), in Lewy bodies and Lewy neurites in Parkinson's disease, in ballooned neurons in Creutzfeldt–Jakob disease, in glial cytoplasmic inclusions in multiple system atrophy, and in Pick bodies in Pick's disease, suggesting that OPTN may represent a more general marker of neurodegenerative diseases [78]. Although OPTN is widely distributed in neurodegenerative conditions, its significance remains obscure. To address this question, we extended prior work and demonstrated that the expression of OPTN is downregulated in the brains of AD patients and APP/PS1 Tg mice (Fig. 1A–G).
Given the above observation, we were prompted to determine OPTN's role in regulating the activity of AIM2 inflammasomes. As OPTN is an essential receptor for mitochondrial autophagy [12], we found that its restoration blocked the effects of Aβo on disrupting the fusion between mitochondria and lysosomes (Fig. 2). OPTN is recruited to the damaged outer mitochondrial membrane by binding to ubiquitinated mitochondrial proteins in a PTEN-induced putative kinase 1 (PINK1)- and parkin RBR E3 ubiquitin protein ligase (PARK2)-dependent manner [79]. OPTN then induces the formation of autophagosomes around damaged mitochondria by interacting with LC3 through its LC3-interacting region (LIR) [64]. Conversely, depletion of endogenous OPTN inhibits LC3 recruitment to mitochondria, resulting in inhibition of mitochondrial degradation [64]. As upstream modulators of LC3, autophagic factors, including unc-51-like autophagy activating kinase 1 (ULK1), double FYVE-containing protein 1 (DFCP1) and WD-repeat protein interacting with phosphoinositides 1 (WIPI1), are also recruited to focal regions proximal to the mitochondria [12]. Moreover, phosphorylation of OPTN by TANK-binding kinase 1 (TBK1) at Ser473 enhances its binding to Ser65-phosphorylated ubiquitin (pS65-Ub) chains, resulting in recruitment of OPTN to mitochondria in PINK1-driven, parkin-independent mitophagy [80]. Although the biological functions of OPTN in mitophagy have been linked to neurodegenerative diseases, including ALS [64], Parkinson's disease [79] and Huntington's disease [81], the significance of OPTN in AD had not been elucidated. We therefore extended prior work on OPTN to AD, demonstrating that Aβ disrupts the fusion between impaired mitochondria and lysosomes by suppressing the expression of OPTN in microglial cells (Fig. 2B).
In response to the disruption of mitochondrial autophagy, impaired mitochondria release reactive oxygen species (ROS), free radicals and mitochondrial DNA (mtDNA) into the cytoplasm due to a lack of mitochondrial clearance, which may contribute to inflammation [59]. Indeed, AIM2 has been reported to bind free cytoplasmic DNA through its HIN200 domain, inducing the oligomerization of ASC and leading to caspase-1-dependent inflammasome formation and the maturation and secretion of proinflammatory cytokines such as IL-1β and IL-18 [60]. Consistently, we further found that OPTN deficiency is critical for activating AIM2 inflammasomes in microglial cells (Fig. 5A).
We observed that OPTN deficiency induces accumulation of the RIPK1 protein in microglial cells (Fig. 8). Unlike its regulation of AIM2 inflammasomes, OPTN degrades RIPK1 via the proteasome pathway (Fig. 8). In agreement with our observation, RIPK1 is activated in response to proteasome inhibition [82]. In addition, there is evidence that RIPK1-mediated Cst7 induction leads to lysosomal pathway damage, which further induces the disease-associated microglia (DAM) phenotype, including an enhanced inflammatory response and decreased phagocytic activity [48]. However, the underlying mechanisms in microglial cells in the context of AD were not understood. To address this gap, we extended prior work and found that OPTN recruits ubiquitinated RIPK1 to proteasomes through its UBAN domain and the death domain of RIPK1 (Fig. 9F, G). These findings provide a reasonable explanation for the observation that RIPK1 is activated by proteasome inhibition in microglial cells.
Beyond these regulatory mechanisms, we further found that ectopically expressed OPTN inhibited the proinflammatory NF-κB pathway by degrading RIPK1 in APP/PS1 Tg mice (Fig. 10). RIPK1 mediates axonal degeneration by promoting inflammation and necroptosis in ALS [82]. As a key regulator of innate immune signaling, heterozygous RIPK1 mutations that prevent caspase-8 cleavage of RIPK1 promote autoinflammatory disease in humans [83]. Given the critical roles of RIPK1 in inflammation, it has been identified as a therapeutic target in monogenic and polygenic autoimmune, inflammatory, neurodegenerative, ischemic and acute conditions such as sepsis, with potential applications for RIPK1 inhibitors [84]. Based on these clues, we extended prior work and demonstrated that RIPK1 mediates the effects of OPTN on suppressing neuroinflammation.
All data generated or analyzed during this study are included in this published article.
AAV:
Adeno-associated virus
AIM2:
Absent in melanoma 2
ALS:
Amyotrophic lateral sclerosis
APs:
β-Amyloid plaques
Aβ:
β-Amyloid protein
Aβos:
Aβ oligomers
Bafi:
Bafilomycin
ASC:
Apoptosis-associated speck-like protein containing a caspase recruitment domain
CoIP:
Co-immunoprecipitation
DD:
Death domain
FADD:
FAS-associated death domain protein
GAPDH:
Glyceraldehyde-3-phosphate dehydrogenase
GSDMD:
Gasdermin-D
IHC:
Immunohistochemistry
mtDNA:
Mitochondrial DNA
NF-κB:
Nuclear factor kappa B
NLRs:
Nod-like receptors
NOD1:
Nucleotide-binding oligomerization domain-containing protein 1
OPTN:
Optineurin
PYD:
Pyrin domain
RAGE:
Receptor for advanced glycation end products
RIPK1:
Receptor interacting serine/threonine kinase 1
TLRs:
Toll-like receptors
TNF-α:
Tumor necrosis factor α
Kerr JS, Adriaanse BA, Greig NH, Mattson MP, Cader MZ, Bohr VA, Fang EF. Mitophagy and Alzheimer's disease: cellular and molecular mechanisms. Trends Neurosci. 2017;40:151–66.
Lustbader JW, Cirilli M, Lin C, Xu HW, Takuma K, Wang N, Caspersen C, Chen X, Pollak S, Chaney M, et al. ABAD directly links Abeta to mitochondrial toxicity in Alzheimer's disease. Science. 2004;304:448–52.
Cen X, Chen Y, Xu X, Wu R, He F, Zhao Q, Sun Q, Yi C, Wu J, Najafov A, Xia H. Pharmacological targeting of MCL-1 promotes mitophagy and improves disease pathologies in an Alzheimer's disease mouse model. Nat Commun. 2020;11:5731.
Chornenkyy Y, Wang WX, Wei A, Nelson PT. Alzheimer's disease and type 2 diabetes mellitus are distinct diseases with potential overlapping metabolic dysfunction upstream of observed cognitive decline. Brain Pathol. 2019;29:3–17.
Yao J, Irwin RW, Zhao L, Nilsen J, Hamilton RT, Brinton RD. Mitochondrial bioenergetic deficit precedes Alzheimer's pathology in female mouse model of Alzheimer's disease. Proc Natl Acad Sci USA. 2009;106:14670–5.
Mao P, Manczak M, Calkins MJ, Truong Q, Reddy TP, Reddy AP, Shirendeb U, Lo HH, Rabinovitch PS, Reddy PH. Mitochondria-targeted catalase reduces abnormal APP processing, amyloid β production and BACE1 in a mouse model of Alzheimer's disease: implications for neuroprotection and lifespan extension. Hum Mol Genet. 2012;21:2973–90.
Leuner K, Schulz K, Schütt T, Pantel J, Prvulovic D, Rhein V, Savaskan E, Czech C, Eckert A, Müller WE. Peripheral mitochondrial dysfunction in Alzheimer's disease: focus on lymphocytes. Mol Neurobiol. 2012;46:194–204.
Chakravorty A, Jetto CT, Manjithaya R. Dysfunctional mitochondria and mitophagy as drivers of Alzheimer's disease pathogenesis. Front Aging Neurosci. 2019;11:311.
Ding WX, Yin XM. Mitophagy: mechanisms, pathophysiological roles, and analysis. Biol Chem. 2012;393:547–64.
Fang EF, Hou Y, Palikaras K, Adriaanse BA, Kerr JS, Yang B, Lautrup S, Hasan-Olive MM, Caponio D, Dan X, et al. Mitophagy inhibits amyloid-β and tau pathology and reverses cognitive deficits in models of Alzheimer's disease. Nat Neurosci. 2019;22:401–12.
Ye X, Sun X, Starovoytov V, Cai Q. Parkin-mediated mitophagy in mutant hAPP neurons and Alzheimer's disease patient brains. Hum Mol Genet. 2015;24:2938–51.
Lazarou M, Sliter DA, Kane LA, Sarraf SA, Wang C, Burman JL, Sideris DP, Fogel AI, Youle RJ. The ubiquitin kinase PINK1 recruits autophagy receptors to induce mitophagy. Nature. 2015;524:309–14.
Lautrup S, Sinclair DA, Mattson MP, Fang EF. NAD(+) in brain aging and neurodegenerative disorders. Cell Metab. 2019;30:630–55.
Hamelin L, Lagarde J, Dorothée G, Leroy C, Labit M, Comley RA, de Souza LC, Corne H, Dauphinot L, Bertoux M, et al. Early and protective microglial activation in Alzheimer's disease: a prospective study using 18F-DPA-714 PET imaging. Brain. 2016;139:1252–64.
Bamberger ME, Harris ME, McDonald DR, Husemann J, Landreth GE. A cell surface receptor complex for fibrillar beta-amyloid mediates microglial activation. J Neurosci. 2003;23:2665–74.
Dani M, Wood M, Mizoguchi R, Fan Z, Walker Z, Morgan R, Hinz R, Biju M, Kuruvilla T, Brooks DJ, Edison P. Microglial activation correlates in vivo with both tau and amyloid in Alzheimer's disease. Brain. 2018;141:2740–54.
Reddy PH, Beal MF. Amyloid beta, mitochondrial dysfunction and synaptic damage: implications for cognitive decline in aging and Alzheimer's disease. Trends Mol Med. 2008;14:45–53.
Heneka MT. Inflammasome activation and innate immunity in Alzheimer's disease. Brain Pathol. 2017;27:220–2.
Reed-Geaghan EG, Savage JC, Hise AG, Landreth GE. CD14 and toll-like receptors 2 and 4 are required for fibrillar A{beta}-stimulated microglial activation. J Neurosci. 2009;29:11982–92.
Origlia N, Bonadonna C, Rosellini A, Leznik E, Arancio O, Yan SS, Domenici L. Microglial receptor for advanced glycation end product-dependent signal pathway drives beta-amyloid-induced synaptic depression and long-term depression impairment in entorhinal cortex. J Neurosci. 2010;30:11414–25.
Halle A, Hornung V, Petzold GC, Stewart CR, Monks BG, Reinheckel T, Fitzgerald KA, Latz E, Moore KJ, Golenbock DT. The NALP3 inflammasome is involved in the innate immune response to amyloid-beta. Nat Immunol. 2008;9:857–65.
Sheedy FJ, Grebe A, Rayner KJ, Kalantari P, Ramkhelawon B, Carpenter SB, Becker CE, Ediriweera HN, Mullick AE, Golenbock DT, et al. CD36 coordinates NLRP3 inflammasome activation by facilitating intracellular nucleation of soluble ligands into particulate ligands in sterile inflammation. Nat Immunol. 2013;14:812–20.
Yang J, Wise L, Fukuchi KI. TLR4 cross-talk with NLRP3 inflammasome and complement signaling pathways in Alzheimer's disease. Front Immunol. 2020;11:724.
Lue LF, Walker DG, Brachova L, Beach TG, Rogers J, Schmidt AM, Stern DM, Yan SD. Involvement of microglial receptor for advanced glycation endproducts (RAGE) in Alzheimer's disease: identification of a cellular activation mechanism. Exp Neurol. 2001;171:29–45.
Fink SL, Bergsbaken T, Cookson BT. Anthrax lethal toxin and Salmonella elicit the common cell death pathway of caspase-1-dependent pyroptosis via distinct mechanisms. Proc Natl Acad Sci USA. 2008;105:4312–7.
Ren T, Zamboni DS, Roy CR, Dietrich WF, Vance RE. Flagellin-deficient Legionella mutants evade caspase-1- and Naip5-mediated macrophage immunity. PLoS Pathog. 2006;2:e18.
Mariathasan S, Weiss DS, Dixit VM, Monack DM. Innate immunity against Francisella tularensis is dependent on the ASC/caspase-1 axis. J Exp Med. 2005;202:1043–9.
Fink SL, Cookson BT. Caspase-1-dependent pore formation during pyroptosis leads to osmotic lysis of infected host macrophages. Cell Microbiol. 2006;8:1812–25.
Sun GW, Lu J, Pervaiz S, Cao WP, Gan YH. Caspase-1 dependent macrophage death induced by Burkholderia pseudomallei. Cell Microbiol. 2005;7:1447–58.
van der Velden AW, Velasquez M, Starnbach MN. Salmonella rapidly kill dendritic cells via a caspase-1-dependent mechanism. J Immunol. 2003;171:6742–9.
Edgeworth JD, Spencer J, Phalipon A, Griffin GE, Sansonetti PJ. Cytotoxicity and interleukin-1beta processing following Shigella flexneri infection of human monocyte-derived dendritic cells. Eur J Immunol. 2002;32:1464–71.
Watson PR, Gautier AV, Paulin SM, Bland AP, Jones PW, Wallis TS. Salmonella enterica serovars Typhimurium and Dublin can lyse macrophages by a mechanism distinct from apoptosis. Infect Immun. 2000;68:3744–7.
Bergsbaken T, Fink SL, Cookson BT. Pyroptosis: host cell death and inflammation. Nat Rev Microbiol. 2009;7:99–109.
Higaki H, Choudhury ME, Kawamoto C, Miyamoto K, Islam A, Ishii Y, Miyanishi K, Takeda H, Seo N, Sugimoto K, et al. The hypnotic bromovalerylurea ameliorates 6-hydroxydopamine-induced dopaminergic neuron loss while suppressing expression of interferon regulatory factors by microglia. Neurochem Int. 2016;99:158–68.
Kufer TA, Sansonetti PJ. Sensing of bacteria: NOD a lonely job. Curr Opin Microbiol. 2007;10:62–9.
Netea MG, Nold-Petry CA, Nold MF, Joosten LA, Opitz B, van der Meer JH, van de Veerdonk FL, Ferwerda G, Heinhuis B, Devesa I, et al. Differential requirement for the activation of the inflammasome for processing and release of IL-1beta in monocytes and macrophages. Blood. 2009;113:2324–35.
Guo H, Callaway JB, Ting JP. Inflammasomes: mechanism of action, role in disease, and therapeutics. Nat Med. 2015;21:677–87.
Man SM, Kanneganti TD. Regulation of inflammasome activation. Immunol Rev. 2015;265:6–21.
Walsh JG, Muruve DA, Power C. Inflammasomes in the CNS. Nat Rev Neurosci. 2014;15:84–97.
Wang B, Yin Q. AIM2 inflammasome activation and regulation: a structural perspective. J Struct Biol. 2017;200:279–82.
Malik A, Kanneganti TD. Inflammasome activation and assembly at a glance. J Cell Sci. 2017;130:3955–63.
Broz P, Dixit VM. Inflammasomes: mechanism of assembly, regulation and signalling. Nat Rev Immunol. 2016;16:407–20.
Gaidt MM, Ebert TS, Chauhan D, Schmidt T, Schmid-Burgk JL, Rapino F, Robertson AA, Cooper MA, Graf T, Hornung V. Human monocytes engage an alternative inflammasome pathway. Immunity. 2016;44:833–46.
Gröschel MI, Sayes F, Shin SJ, Frigui W, Pawlik A, Orgeur M, Canetti R, Honoré N, Simeone R, van der Werf TS, et al. Recombinant BCG expressing ESX-1 of Mycobacterium marinum combines low virulence with cytosolic immune signaling and improved TB protection. Cell Rep. 2017;18:2752–65.
Heneka MT, Kummer MP, Stutz A, Delekate A, Schwartz S, Vieira-Saecker A, Griep A, Axt D, Remus A, Tzeng TC, et al. NLRP3 is activated in Alzheimer's disease and contributes to pathology in APP/PS1 mice. Nature. 2013;493:674–8.
Speir M, Lawlor KE. RIP-roaring inflammation: RIPK1 and RIPK3 driven NLRP3 inflammasome activation and autoinflammatory disease. Semin Cell Dev Biol. 2021;109:114–24.
Caccamo A, Branca C, Piras IS, Ferreira E, Huentelman MJ, Liang WS, Readhead B, Dudley JT, Spangenberg EE, Green KN, et al. Necroptosis activation in Alzheimer's disease. Nat Neurosci. 2017;20:1236–46.
Ofengeim D, Mazzitelli S, Ito Y, DeWitt JP, Mifflin L, Zou C, Das S, Adiconis X, Chen H, Zhu H, et al. RIPK1 mediates a disease-associated microglial response in Alzheimer's disease. Proc Natl Acad Sci USA. 2017;114:E8788-e8797.
Yuan J, Amin P, Ofengeim D. Necroptosis and RIPK1-mediated neuroinflammation in CNS diseases. Nat Rev Neurosci. 2019;20:19–33.
Wang X, Zheng W, Xie JW, Wang T, Wang SL, Teng WP, Wang ZY. Insulin deficiency exacerbates cerebral amyloidosis and behavioral deficits in an Alzheimer transgenic mouse model. Mol Neurodegener. 2010;5:46.
Opazo P, Viana da Silva S, Carta M, Breillat C, Coultrap SJ, Grillo-Bosch D, Sainlos M, Coussen F, Bayer KU, Mulle C, Choquet D. CaMKII metaplasticity drives Aβ oligomer-mediated synaptotoxicity. Cell Rep. 2018;23:3137–45.
Yu X, Guan P-P, Guo J-W, Wang Y, Cao L-L, Xu G-B, Konstantopoulos K, Wang Z-Y, Wang P. By suppressing the expression of anterior pharynx-defective-1α and-1β and inhibiting the aggregation of β-amyloid protein, magnesium ions inhibit the cognitive decline of amyloid precursor protein/presenilin 1 transgenic mice. FASEB J. 2015;29:5044–58.
Cao LL, Guan PP, Liang YY, Huang XS, Wang P. Calcium ions stimulate the hyperphosphorylation of tau by activating microsomal prostaglandin E synthase 1. Front Aging Neurosci. 2019;11:108.
Cao LL, Guan PP, Liang YY, Huang XS, Wang P. Cyclooxygenase-2 is essential for mediating the effects of calcium ions on stimulating phosphorylation of tau at the sites of Ser 396 and Ser 404. J Alzheimers Dis. 2019;68:1095–111.
Dikic I. Proteasomal and autophagic degradation systems. Annu Rev Biochem. 2017;86:193–224.
Trinh J, Farrer M. Advances in the genetics of Parkinson disease. Nat Rev Neurol. 2013;9:445–54.
Moors T, Paciotti S, Chiasserini D, Calabresi P, Parnetti L, Beccari T, van de Berg WD. Lysosomal dysfunction and α-synuclein aggregation in Parkinson's disease: diagnostic links. Mov Disord. 2016;31:791–801.
Padman BS, Nguyen TN, Uoselis L, Skulsuppaisarn M, Nguyen LK, Lazarou M. LC3/GABARAPs drive ubiquitin-independent recruitment of Optineurin and NDP52 to amplify mitophagy. Nat Commun. 2019;10:408.
Picca A, Calvani R, Coelho-Junior HJ, Landi F, Bernabei R, Marzetti E. Mitochondrial dysfunction, oxidative stress, and neuroinflammation: intertwined roads to neurodegeneration. Antioxidants (Basel). 2020;9:647.
Jin T, Perry A, Jiang J, Smith P, Curry JA, Unterholzner L, Jiang Z, Horvath G, Rathinam VA, Johnstone RW, et al. Structures of the HIN domain: DNA complexes reveal ligand binding and activation mechanisms of the AIM2 inflammasome and IFI16 receptor. Immunity. 2012;36:561–71.
Markovinovic A, Cimbro R, Ljutic T, Kriz J, Rogelj B, Munitic I. Optineurin in amyotrophic lateral sclerosis: multifunctional adaptor protein at the crossroads of different neuroprotective mechanisms. Prog Neurobiol. 2017;154:1–20.
Udan ML, Ajit D, Crouse NR, Nichols MR. Toll-like receptors 2 and 4 mediate Abeta(1–42) activation of the innate immune response in a human monocytic cell line. J Neurochem. 2008;104:524–33.
Degterev A, Ofengeim D, Yuan J. Targeting RIPK1 for the treatment of human diseases. Proc Natl Acad Sci USA. 2019;116:9714–22.
Wong YC, Holzbaur EL. Optineurin is an autophagy receptor for damaged mitochondria in parkin-mediated mitophagy that is disrupted by an ALS-linked mutation. Proc Natl Acad Sci USA. 2014;111:E4439-4448.
Vangone A, Bonvin AM. Contacts-based prediction of binding affinity in protein-protein complexes. Elife. 2015;4:e07454.
Weiner HL, Frenkel D. Immunology and immunotherapy of Alzheimer's disease. Nat Rev Immunol. 2006;6:404–16.
McGeer PL, McGeer EG. Inflammation, autotoxicity and Alzheimer disease. Neurobiol Aging. 2001;22:799–809.
Akiyama H, Barger S, Barnum S, Bradt B, Bauer J, Cole GM, Cooper NR, Eikelenboom P, Emmerling M, Fiebich BL, et al. Inflammation and Alzheimer's disease. Neurobiol Aging. 2000;21:383–421.
Freeman LC, Ting JP. The pathogenic role of the inflammasome in neurodegenerative diseases. J Neurochem. 2016;136(Suppl 1):29–38.
Hook VY, Kindy M, Hook G. Inhibitors of cathepsin B improve memory and reduce beta-amyloid in transgenic Alzheimer disease mice expressing the wild-type, but not the Swedish mutant, beta-secretase site of the amyloid precursor protein. J Biol Chem. 2008;283:7745–53.
Du W, Zhen J, Zheng Z, Ma S, Chen S. Expression of AIM2 is high and correlated with inflammation in hepatitis B virus associated glomerulonephritis. J Inflamm (Lond). 2013;10:37.
Fernandes-Alnemri T, Yu JW, Datta P, Wu J, Alnemri ES. AIM2 activates the inflammasome and cell death in response to cytoplasmic DNA. Nature. 2009;458:509–13.
Bae JH, Jo SI, Kim SJ, Lee JM, Jeong JH, Kang JS, Cho NJ, Kim SS, Lee EY, Moon JS. Circulating cell-free mtDNA contributes to AIM2 inflammasome-mediated chronic inflammation in patients with type 2 diabetes. Cells. 2019;8:328.
Denes A, Coutts G, Lenart N, Cruickshank SM, Pelegrin P, Skinner J, Rothwell N, Allan SM, Brough D. AIM2 and NLRC4 inflammasomes contribute with ASC to acute brain injury independently of NLRP3. Proc Natl Acad Sci USA. 2015;112:4050–5.
Adamczak SE, de Rivero Vaccari JP, Dale G, Brand FJ 3rd, Nonner D, Bullock MR, Dahl GP, Dietrich WD, Keane RW. Pyroptotic neuronal cell death mediated by the AIM2 inflammasome. J Cereb Blood Flow Metab. 2014;34:621–9.
Wu PJ, Hung YF, Liu HY, Hsueh YP. Deletion of the inflammasome sensor Aim2 mitigates Abeta deposition and microglial activation but increases inflammatory cytokine expression in an Alzheimer disease mouse model. NeuroImmunoModulation. 2017;24:29–39.
Liu YH, Tian T. Hypothesis of optineurin as a new common risk factor in normal-tension glaucoma and Alzheimer's disease. Med Hypotheses. 2011;77:591–2.
Osawa T, Mizuno Y, Fujita Y, Takatama M, Nakazato Y, Okamoto K. Optineurin in neurodegenerative diseases. Neuropathology. 2011;31:569–74.
Wong YC, Holzbaur EL. Temporal dynamics of PARK2/parkin and OPTN/optineurin recruitment during the mitophagy of damaged mitochondria. Autophagy. 2015;11:422–4.
Richter B, Sliter DA, Herhaus L, Stolz A, Wang C, Beli P, Zaffagnini G, Wild P, Martens S, Wagner SA, et al. Phosphorylation of OPTN by TBK1 enhances its binding to Ub chains and promotes selective autophagy of damaged mitochondria. Proc Natl Acad Sci USA. 2016;113:4039–44.
Faber PW, Barnes GT, Srinidhi J, Chen J, Gusella JF, MacDonald ME. Huntingtin interacts with a family of WW domain proteins. Hum Mol Genet. 1998;7:1463–74.
Ito Y, Ofengeim D, Najafov A, Das S, Saberi S, Li Y, Hitomi J, Zhu H, Chen H, Mayo L, et al. RIPK1 mediates axonal degeneration by promoting inflammation and necroptosis in ALS. Science. 2016;353:603–8.
Lalaoui N, Boyden SE, Oda H, Wood GM, Stone DL, Chau D, Liu L, Stoffels M, Kratina T, Lawlor KE, et al. Mutations that prevent caspase cleavage of RIPK1 cause autoinflammatory disease. Nature. 2020;577:103–8.
Mifflin L, Ofengeim D, Yuan J. Receptor-interacting protein kinase 1 (RIPK1) as a therapeutic target. Nat Rev Drug Discov. 2020;19:553–71.
This work was supported in part or in whole by the National Natural Science Foundation of China (CN) (81771167 and 81870840).
College of Life and Health Sciences, Northeastern University, No. 3-11. Wenhua Road, Shenyang, 110819, People's Republic of China
Long-Long Cao, Pei-Pei Guan, Shen-Qing Zhang, Yi Yang, Xue-Shi Huang & Pu Wang
LLC performed the experiments, analyzed and interpreted the data, and drafted the manuscript. PPG, SQZ and YY carried out selected experiments. PW and XSH designed the experiments, interpreted the manuscript and wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Xue-Shi Huang or Pu Wang.
This study was carried out in accordance with the recommendations of the Care and Use of Medical Laboratory Animals (Ministry of Health, Beijing, China). The protocol was approved by the Laboratory Ethics Committee of Northeastern University.
Additional file 1: Figure S1. Knockdown of AIM2 decreases the cleavage of caspase-1 and the production of IL-1β in microglial cells. Figure S2. OPTN was downregulated during the course of AD development and progression. Figure S3. OPTN was expressed in microglial cells of mice. Figure S4. The expression of NLRP3, NLRP1, Pyrin and NLRC4 in AD patients and APP/PS1 Tg mice. Figure S5. The mRNA expression of OPTN was analyzed in familial or sporadic AD patients.
Cao, LL., Guan, PP., Zhang, SQ. et al. Downregulating expression of OPTN elevates neuroinflammation via AIM2 inflammasome- and RIPK1-activating mechanisms in APP/PS1 transgenic mice. J Neuroinflammation 18, 281 (2021). https://doi.org/10.1186/s12974-021-02327-4
The European Physical Journal C
August 2013, 73:2514
Novel discrete symmetries in the general N = 2 supersymmetric quantum mechanical model
R. Kumar
R. P. Malik
Regular Article - Theoretical Physics
First Online: 08 August 2013
In addition to the usual supersymmetric (SUSY) continuous symmetry transformations for the general N = 2 SUSY quantum mechanical model, we show the existence of a set of novel discrete symmetry transformations for the Lagrangian of the above SUSY quantum mechanical model. Out of all these discrete symmetry transformations, a unique discrete transformation corresponds to the Hodge duality operation of differential geometry, and the above SUSY continuous symmetry transformations (and their anticommutator) provide the physical realizations of the de Rham cohomological operators of differential geometry. Thus, we provide a concrete proof of our earlier conjecture that any arbitrary N = 2 SUSY quantum mechanical model is an example of a Hodge theory where the cohomological operators find their physical realizations in the language of symmetry transformations of this theory. Possible physical implications of our present study are pointed out, too.
Keywords: Discrete symmetry · Symmetry transformation · Exterior derivative · Duality transformation · Hodge theory
R.K. would like to express his deep gratitude to the UGC, Government of India, for the financial support through the SRF scheme.
Appendix: On the derivation of the N = 2 SUSY algebra
All the algebraic structures in Sect. 6 are based on the basic N = 2 SUSY algebra \(Q^{2} = \bar{Q}^{2} = 0, \{Q, \bar{Q}\} = H, [H, Q] = [H, \bar{Q}] = 0\), which is satisfied only on-shell. To corroborate this statement, first of all, the auxiliary variable A in the Lagrangian (10) is replaced by (−W′) due to the equation of motion A = −W′ (which emerges from (10) itself). Furthermore, the symmetry transformations \(s_1\) and \(s_2\) (cf. (11)) are modified slightly by overall constant factors. Thus, we have the following different-looking Lagrangian:
$$\begin{aligned} L_0 = \frac{1}{2} {\dot{x}}^2 + i\bar{\psi}\dot{\psi}- \frac {1}{2} \bigl(W'\bigr)^2 + W'' \bar{\psi}\psi, \end{aligned}$$
(A.1)
which remains invariant under the following transformations:
$$\begin{aligned} \begin{aligned} & s_1 x = -\frac{1}{\sqrt{2}}i \psi, \qquad s_1 \bar{\psi}= \frac{1}{\sqrt{2}}\bigl(\dot{x} - iW'\bigr), \qquad s_1 \psi = 0, \\ & s_2 x = + \frac{1}{\sqrt{2}}i \bar{\psi}, \qquad s_2 \psi = - \frac{1}{\sqrt{2}} \bigl(\dot{x} + iW'\bigr),\\ & s_2 \bar{\psi}= 0. \end{aligned} \end{aligned}$$
The above transformations are nilpotent of order two (i.e. \(s_{1}^{2} = s_{2}^{2} = 0\)) only when the equations of motion \(\dot{\psi}- i W'' \psi = 0, \dot {\bar{\psi}} + i W'' \bar{\psi}= 0 \) are used. It can be checked that \(s_{1} L_{0} = d/dt (W^{\prime}\psi/ \sqrt{2} )\) and \(s_{2} L_{0} = d/dt (i \bar{\psi}\dot{x}/ \sqrt{2} )\). Hence, the action integral \(S = \int dt\, L_{0}\) remains invariant under \(s_{1}\) and \(s_{2}\).
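As a quick check of this on-shell nilpotency (a worked step we add for clarity, using only the transformations (A.2)), acting twice with \(s_1\) on \(\bar{\psi}\) gives

$$\begin{aligned} s_1^2 \bar{\psi} = \frac{1}{\sqrt{2}} \bigl(s_1 \dot{x} - i W'' s_1 x \bigr) = \frac{1}{\sqrt{2}} \Bigl(-\frac{i}{\sqrt{2}} \dot{\psi} - \frac{1}{\sqrt{2}} W'' \psi \Bigr) = -\frac{i}{2} \bigl(\dot{\psi} - i W'' \psi \bigr), \end{aligned}$$

which vanishes only after the equation of motion \(\dot{\psi} - i W'' \psi = 0\) is imposed; the computation of \(s_2^2 \psi\) is analogous and requires \(\dot{\bar{\psi}} + i W'' \bar{\psi} = 0\).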
The conserved Noether charges, which emerge corresponding to (A.2), are
$$\begin{aligned} Q = -\frac{1}{\sqrt{2}} \bigl(i\dot{x} + W' \bigr) \psi, \qquad \bar{Q} = +\frac{1}{\sqrt{2}} \bar{\psi}\bigl(i\dot{x} - W' \bigr). \end{aligned}$$
These charges are the same as those quoted in (15), except that A has been replaced by (−W′) (due to the equation of motion from the Lagrangian (10)) and the constant factors \((\mp 1/\sqrt{2})\) have been included for algebraic convenience. It can be readily checked that the above charges are the generators of the transformations (A.2) because we have the following relationships:
$$\begin{aligned} s_1 \varPhi = \pm i [\varPhi, Q ]_\pm, \qquad s_2 \varPhi = \pm i [\varPhi, \bar{Q} ]_\pm, \end{aligned}$$
where the generic variable Φ corresponds to the variables \(x, \psi, \bar{\psi}\) and the subscripts (±) on square brackets stand for the (anti)commutator depending on the generic variable Φ being (fermionic) bosonic in nature. The (±) signs, in front of the brackets, are also chosen judiciously (see, e.g. [21] for details).
The structure of the specific N = 2 SUSY algebra now follows when we exploit the basic relationship (A.4). In other words, we observe the following:
$$\begin{aligned} \begin{aligned} & s_1 Q = i \{Q, Q \} = 0\quad \Longrightarrow\quad Q^2 = 0, \\ & s_2 \bar{Q} = i \{\bar{Q}, \bar{Q} \} = 0\quad \Longrightarrow\quad \bar{Q}^2 = 0, \\ & s_1 \bar{Q} = i \{\bar{Q}, Q \} = iH \quad\Longrightarrow\quad \{\bar{Q}, Q \} = H, \\ & s_2 Q = i \{Q, \bar{Q} \} = iH \quad\Longrightarrow\quad \{Q, \bar{Q} \} = H, \end{aligned} \end{aligned}$$
where H is the Hamiltonian (corresponding to the Lagrangian (A.1)). The explicit form of H can be mathematically expressed as
$$\begin{aligned} H =& \frac{1}{2} {\dot{x}}^2 + \frac {1}{2} \bigl(W'\bigr)^2 - W'' \bar{\psi}\psi \\ \equiv & \frac{1}{2} p^2 + \frac {1}{2} \bigl(W'\bigr)^2 - W'' \bar{\psi}\psi, \end{aligned}$$
where \(p = \dot{x}\) is the momentum corresponding to the variable x. We also lay emphasis on the fact that we have exploited the equations of motion \(\dot{\psi}- i W'' \psi = 0, \dot {\bar{\psi}} + i W'' \bar{\psi}= 0 \) in the derivation of H from the Legendre transformation \(H = \dot{x} p + \dot{\psi}\varPi_{\psi}+ \dot{\bar{\psi}} \varPi_{\bar{\psi}} - L\), where \(\varPi_{\psi}= - i \bar{\psi}\) and \(\varPi_{\bar{\psi}} = 0\). The derivation of the specific N = 2 SUSY algebra (cf. (A.5)) is very straightforward because we have used only (A.2) and (A.3) in the calculation of the l.h.s. of (A.5), from which the results on the r.h.s. (i.e. the specific N = 2 SUSY algebra) trivially ensue.
We end this appendix with the remark that the specific N = 2 SUSY algebra \(Q^{2} = \bar{Q}^{2} = 0, \{Q, \bar{Q}\} = H\), listed in (A.5), is valid only on-shell, where the Euler–Lagrange equations of motion hold. Furthermore, it may be noted that, for the choices W′ = ωx and W′ = ωf(x) in the Lagrangian (A.1), we obtain the Lagrangians for the SUSY harmonic oscillator and its generalization in [5]. For the description of the motion of a charged particle in the X–Y plane under the influence of a magnetic field along the Z-direction, the appropriate choice of W′ can be found in the standard books on SUSY quantum mechanics and the relevant literature (see, e.g. [2, 3]).
E. Witten, Nucl. Phys. B 188, 513 (1981)
F. Cooper, A. Khare, U. Sukhatme, Phys. Rep. 251, 264 (1995)
A. Das, Field Theory: A Path Integral Approach (World Scientific, Singapore, 1993)
R. Kumar, R.P. Malik, Europhys. Lett. 98, 11002 (2012)
R.P. Malik, A. Khare, Ann. Phys. 334, 142 (2013)
R.P. Malik, Int. J. Mod. Phys. A 22, 3521 (2007)
R.P. Malik, Mod. Phys. Lett. A 15, 2079 (2000)
R.P. Malik, Mod. Phys. Lett. A 16, 477 (2001)
S. Gupta, R.P. Malik, Eur. Phys. J. C 58, 517 (2008)
R. Kumar, S. Krishna, A. Shukla, R.P. Malik, Eur. Phys. J. C 72, 2188 (2012)
R. Kumar, S. Krishna, A. Shukla, R.P. Malik, arXiv:1203.5519 [hep-th]
R.P. Malik, J. Phys. A, Math. Gen. 41, 4167 (2001)
E. Witten, Commun. Math. Phys. 117, 353 (1988)
A.S. Schwarz, Lett. Math. Phys. 2, 247 (1978)
F. Cooper, B. Freedman, Ann. Phys. 146, 262 (1983)
A. Lahiri, P.K. Roy, B. Bagchi, Int. J. Mod. Phys. A 5, 1383 (1990)
T. Eguchi, P.B. Gilkey, A. Hanson, Phys. Rep. 66, 213 (1980)
S. Mukhi, N. Mukunda, Introduction to Topology, Differential Geometry and Group Theory for Physicists (Wiley, New Delhi, 1990)
K. Nishijima, Prog. Theor. Phys. 80, 897 (1988)
S. Deser, A. Gomberoff, M. Henneaux, C. Teitelboim, Phys. Lett. B 400, 80 (1997)
S. Gupta, R. Kumar, R.P. Malik, arXiv:0908.2561 [hep-th]
F. Correa, V. Jakubsky, L. Nieto, M.S. Plyushchay, Phys. Rev. Lett. 101, 030403 (2008)
F. Correa, V. Jakubsky, M.S. Plyushchay, J. Phys. A 41, 485303 (2008)
M. de Crombrugghe, V. Rittenberg, Ann. Phys. 151, 99 (1983)
A. Khare, J. Maharana, Nucl. Phys. B 244, 409 (1984)
R.P. Malik et al. (in preparation)
© Springer-Verlag Berlin Heidelberg and Società Italiana di Fisica 2013
1. Department of Physics, Center of Advanced Studies, Faculty of Science, Banaras Hindu University, Varanasi, India
2. DST Center for Interdisciplinary Mathematical Sciences, Faculty of Science, Banaras Hindu University, Varanasi, India
Kumar, R. & Malik, R.P. Eur. Phys. J. C (2013) 73: 2514. https://doi.org/10.1140/epjc/s10052-013-2514-7
Revised 15 July 2013
EPJC is an open-access journal funded by SCOAP3 and licensed under CC BY 4.0
Comparison of durability of treated wood using stake tests and survival analysis
Ikuo Momohara (ORCID: orcid.org/0000-0001-9655-7337)1,
Haruko Sakai2 &
Yuji Kubo3
The stake test is widely used to evaluate the efficacy of wood preservatives. This test monitors the deterioration level observed in treated stakes partially inserted into the ground, and the results are conventionally expressed as the relationship between deterioration levels and exposure periods. Based on these results, preservative efficacy is compared among stake groups treated at different retention levels; however, the conventional comparison lacks a scientific basis. We applied survival analysis to the conventional stake test to give the test such a basis. Stakes impregnated with different types and retention levels of preservatives were subjected to deterioration at two test sites for approximately 30 years. The deterioration levels were monitored according to the conventional procedure, and survival analysis was applied to the monitored data. Kaplan–Meier plots of the survival probabilities against the exposure periods indicated a significant difference between the durability of the stakes treated with alkylammonium chloride (AAC-1) at the K2 and K3 retention levels, whereas no significant difference was observed between those at the K3 and K4 retention levels. In contrast, emulsified copper naphthenate (NCU-E) was found to be a reliable preservative, and the stakes impregnated with NCU-E showed a significant increase in durability in accordance with preservative retention. Alkaline copper quaternary (ACQ-1) also appeared to be a reliable preservative; however, the increase in stake durability after ACQ-1 treatment differed between the test sites. These results were verified using the modified Gehan–Breslow–Wilcoxon test with Holm's p adjusting method.
Wood is a material that mitigates climate change [1,2,3]; hence, many efforts have been made to extend the service life of wooden materials. Although chemical or thermal modification processes have been recently adopted [4, 5], preservative impregnation by a vacuum-pressure process has been the most common method for increasing the durability of wooden materials [6,7,8,9].
Preservatives used for this process have been developed over several decades. For example, 30 years ago, chromated copper arsenate (CCA) was widely used for sill members in Japanese houses. However, 20 years ago, CCA was completely replaced with safer preservatives, such as alkylammonium chloride (AAC-1), emulsified copper naphthenate (NCU-E), emulsified zinc naphthenate (NZN-E), and alkaline copper quaternary (ACQ-1) [10]. During the development of safe preservatives, the relationship between the durability of the treated wood and retention of impregnated preservatives was investigated. Our previous paper showed that the apparent mean service lives of the stakes impregnated with AAC-1 appeared to increase with increasing preservative retention at a low retention range, whereas they appeared saturated at a high retention range [11]. In contrast to this pattern, the stakes impregnated with ACQ-1 showed a trend that increased retention resulted in an increase in durability throughout the retention range [12]. These studies showed an association between durability and preservative retention; however, the difference in durability at different retention levels was not estimated effectively.
The reason for improper estimation was considered to be the lack of scientific rigor in the conventional stake test. Therefore, we improved the conventional stake test by including survival analysis and demonstrated that a combination of the conventional stake test and survival analysis could successfully determine the difference in durability of untreated wood with scientific rigor [13]. Here, we present the application of survival analysis to the stake test data and discuss the effect of preservative retention on the durability of treated stakes.
Preparation of untreated stakes
Stakes of Japanese cedar (Cryptomeria japonica) sapwood were prepared from green logs separately by the Nara Forest Research Center and Koshii Preserving. The dimensions of the stakes prepared by the Nara Center were 3 × 3 × 60 cm (L), whereas those prepared by Koshii Preserving were 3 × 3 × 35 cm (L). Western hemlock (Tsuga heterophylla) sapwood stakes were prepared by the Koshii Preserving, similar to the Koshii cedar stakes. The bottom end of the Koshii stakes was sharpened in the shape of a quadrangular pyramid, whereas no such processing was applied to the Nara stakes.
Preparation of treated stakes
The stakes were impregnated with wood preservatives that satisfied the quality specifications of JIS K1570:2013 [14]. The preservatives used were AAC-1, NCU-E, NZN-E, ACQ-1, and emulsified zinc versaticate (VZN-E) (Table 1). The preservatives were impregnated according to the JIS A 9002:2012 process [15] at the Nara Forest Research Center or Koshii Preserving. The treated stakes were dried naturally under roofs until they were exposed to wood-attacking organisms at stake test sites. Estimated retention was calculated from the concentrations of preservatives in a working solution and amounts of the working solution impregnated into the stakes to determine the performance classification of the stakes according to the Japanese Agricultural Standards (JAS) for sawn lumber [16]. The stake conditions and test sites are listed in Table 1. As an exception, the stake groups indicated with two asterisks in Table 1 were ranked in a higher performance class because these stake groups were designed to estimate the minimal performance of treated stakes impregnated with the preservatives according to the JAS criteria for the sawn lumber. A part of the untreated stakes was kept without impregnation as the control.
Table 1 Characteristics of stake group and their apparent service lives
Exposure to wood-attacking organisms
Stake tests were performed at the Nara or Ibaraki test sites (Table 2). The weather properties of the sites were similar, whereas the soil types of the sites were different, as mentioned in our previous paper [13]. The treated and untreated stakes were inserted into the ground at both test sites (Table 1). The insertion depth of all stakes was set to 30 cm, even though the overall stake lengths were 60 cm (the Nara site) and 35 cm (the Ibaraki site). Stake deterioration levels were evaluated annually according to the JIS K 1571:2010 criteria at the ground level [17]. The data obtained from the exposure for one to three decades were used for further analyses.
Table 2 Characteristics of the two test sites
Data analysis by the conventional method
Data analysis and service life determination were performed according to JIS K 1571:2010 [17]. When parts of the stakes were lost, the calculation was performed excluding the data of missing stakes.
Data analysis according to survival analysis
The service life of each stake was designated as the year when the deterioration level of the individual stake reached 2.5 [13], which was calculated according to Eq. (1):
$${\text{YSL}} = Y1 + \frac{2.5 - DL1}{{DL2 - DL1}},$$
where YSL is the service life of a stake, Y1 is the last year in which the deterioration level of the stake was below 2.5, DL1 is the deterioration level of the stake observed at Y1, and DL2 is the deterioration level of the stake observed one year after Y1 [13].
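In code, the interpolation of Eq. (1) can be sketched as follows (a minimal illustration; the function name and data layout are ours, not part of the original analysis):

```python
def service_life(years, levels, threshold=2.5):
    """Service life of one stake via the linear interpolation of Eq. (1).

    years  -- exposure years of the annual inspections, in ascending order
    levels -- deterioration level recorded at each inspection
    Returns None if the threshold was never reached (the stake is censored).
    """
    for i in range(len(years) - 1):
        dl1, dl2 = levels[i], levels[i + 1]
        if dl1 < threshold <= dl2:
            # Y1 is the last year below the threshold; inspections are
            # annual, so the crossing lies within one year of Y1.
            return years[i] + (threshold - dl1) / (dl2 - dl1)
    return None

# A stake inspected in years 5-7 crosses level 2.5 halfway between years 6 and 7:
print(service_life([5, 6, 7], [1.0, 2.0, 3.0]))  # -> 6.5
```

A stake that never reaches the threshold returns None, which later enters the survival analysis as a censored observation.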
Individual service life data were collected and used for survival analyses. Survival analysis was performed using R software (ver. 4.0.4) with the "survival" and "survminer" packages [18]. Significant differences were determined using the Peto & Peto modification of the Gehan–Breslow–Wilcoxon test with Holm's p adjustment method [19] (p < 0.05).
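Holm's step-down adjustment itself is easy to reproduce; the sketch below mirrors what R's p.adjust(method = "holm") computes for a set of pairwise p values (the function name is ours, and this is only an illustration of the correction, not of the Peto & Peto test itself):

```python
def holm_adjust(pvalues):
    """Holm's step-down p adjustment for multiple pairwise comparisons."""
    m = len(pvalues)
    # Rank the raw p values from smallest to largest.
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # The k-th smallest p value is scaled by (m - k + 1); the sequence
        # is then made monotone non-decreasing and capped at 1.
        running_max = max(running_max, (m - rank) * pvalues[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Three pairwise tests, e.g. K2 vs K3, K2 vs K4, K3 vs K4:
print(holm_adjust([0.01, 0.04, 0.30]))  # smallest p tripled, next doubled
```

Each adjusted p value can then be compared directly against the 0.05 threshold used throughout this study.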
Results of conventional stake tests
The characteristics of the stake groups and service lives of the stakes are listed in Table 1. The results of the field test for most of the stake groups have been reported in our previous papers [11, 12, 20, 21].
Before discussing the results of the stake test, it is necessary to highlight the following two points. First, this study included preliminary data: the estimated retention of some stake groups (e.g. the AAC-1 group with an estimated retention of 4.3 kg/m3) was set just below the JAS requirement. In this study, the performance classification of these stake groups was ranked in the next higher class because they were prepared to estimate the minimal performance of treated wood for each JAS criterion. The stakes with two asterisks in Table 1 are those ranked in a higher class of the JAS criterion. Second, the AAC-1 stakes with a single asterisk (Table 1) were impregnated with AAC-1 containing 1% phoxim or 1% chlorpyrifos in the working solution; however, their service lives appeared similar to those of the other AAC-1 stakes. As we concluded that the termiticides did not influence the service lives of the stakes, further analysis was performed assuming that all stake groups impregnated with AAC-1 were of the same quality despite the addition of phoxim or chlorpyrifos.
As shown in Table 1, the results of the conventional test indicate that the service lives of the stakes impregnated with AAC-1 varied from 4.5 to 12 years. The relationship between the estimated retention and service life is unclear. NZN-E and VZN-E also showed a similar tendency, with no clear relationship between the estimated retention and service life. Contrary to these preservatives, the stakes impregnated with ACQ-1 or NCU-E appeared to be affected by the estimated retention level. The estimated ACQ-1 retention increased from 2.8 to 4.5 kg/m3 at the Nara site; the service life increased from 15 to 22 years. The same effect was observed in the stakes impregnated with NCU-E. Regarding the influence of wood species, the Japanese cedar stakes appeared to perform better than the western hemlock stakes when treated with ACQ-1 at a level higher than the K3 criteria of JAS.
Results of survival analysis
As discussed in our previous paper [13], the service life determined by the conventional method is simple and useful; however, mathematical ambiguity remains in the calculation process. Because the deterioration levels collected by annual observation are recorded on an ordinal scale, treating them as a proportional scale and calculating their mean is mathematically inaccurate. Therefore, it is also incorrect to infer differences in durability from the apparent mean values calculated by the conventional procedure. To overcome this inaccuracy, we augmented the conventional stake test with survival analysis and demonstrated that the novel method is useful for comparing durability among groups of untreated stakes [13]. Survival analysis has another advantage in that it can handle missing data. Some stakes were accidentally lost during the long exposure period; in other cases, stakes were deliberately removed, for example, to check retention midway through the test. In such cases, survival analysis can treat the missing data as censored observations and still draw a survival curve with mathematical robustness [22].
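As an illustration of how censoring enters the estimate, a minimal Kaplan–Meier calculation can be written with the standard library alone (our analysis used R's "survival" package; this sketch, including its function name, is only illustrative):

```python
def kaplan_meier(times, observed):
    """Kaplan-Meier survival estimate for one stake group.

    times    -- service life (event) or last-seen year (censored) per stake
    observed -- True if the stake reached deterioration level 2.5,
                False if it was lost or removed first (right-censored)
    Returns (event_times, survival_probabilities).
    """
    event_times, probs = [], []
    s = 1.0
    at_risk = len(times)
    # Walk through the distinct times in order; censored stakes only
    # shrink the risk set, while events also reduce the estimate.
    for t in sorted(set(times)):
        events = sum(1 for ti, ob in zip(times, observed) if ti == t and ob)
        censored = sum(1 for ti, ob in zip(times, observed) if ti == t and not ob)
        if events:
            s *= (at_risk - events) / at_risk
            event_times.append(t)
            probs.append(s)
        at_risk -= events + censored
    return event_times, probs

# Five stakes; one lost after year 6 before failing (censored):
t, s = kaplan_meier([4.5, 6.0, 6.0, 7.5, 9.0], [True, True, False, True, True])
```

The censored stake simply drops out of the risk set after year 6, so the remaining steps of the curve are computed from fewer stakes rather than being biased by the missing observation.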
Here, we apply a novel method to the conventional stake test data of the stakes treated with different preservative retention levels and discuss the significant differences among durability of the stakes treated with different preservatives and retention levels.
The Kaplan–Meier method was applied to compare the durability of stakes treated with wood preservatives, for which the selection of the event was important. The event was defined as the time at which the stake deterioration level was 2.5 [13].
Comparison of AAC-1 stakes of different performance classification
The Kaplan–Meier curves for the stakes impregnated with AAC-1 are shown in Fig. 1. The Y-axis indicates the survival probability, which is the ratio of stakes that did not reach the deterioration level of 2.5. The difference in the lines indicates the difference in the performance classification shown in Table 1. Among stakes impregnated with AAC-K2, a deterioration level of 2.5 was first reached after a 3.5-year exposure, and half of the stakes reached a deterioration level of 2.5 in 7 years. In the case of AAC-K3 and AAC-K4, deterioration of the first stake appeared after 3.5 and 6.3 years of exposure, respectively, and it took 8.5 and 10 years, respectively, for half of the stakes to reach the deterioration level of 2.5. Tick marks on the K2 and K3 data after the 11-year exposure indicate that some stakes were lost or removed before they reached the deterioration level of 2.5.
Kaplan–Meier curves for stakes impregnated with AAC-1 at different performance classifications. Information on the stakes is shown in Table 1. AAC-1 alkylammonium chloride
Multiple comparisons revealed a significant difference between the durability of AAC-K2 and AAC-K3 and between that of AAC-K2 and AAC-K4 (Table 3). In contrast, no significant difference was observed between AAC-K3 and AAC-K4. The fact that some AAC-K4 stakes contained high retention values suggests that no significant difference can be expected between commercial AAC-K3 and AAC-K4 lumber, because lumber manufacturers impregnate preservatives at retention levels just above the minimum required for each performance class. AAC-K4 lumber therefore appears unsuitable for use in severe ground contact conditions.
Table 3 Adjusted p value between each stake group treated with AAC-1
Comparison of emulsified preservatives
NCU-E, NZN-E, and VZN-E consist of oil-soluble preservatives, surfactants, and water. To investigate the performance of NCU-E at different retention levels, Kaplan–Meier curves for the stakes containing NCU-E at the K2–K4 performance classes were plotted (Fig. 2). The graph indicates that increasing retention increased the service lives: the first stake to reach the deterioration level of 2.5 appeared in ascending order of performance classification, and the exposure year by which half the stakes had reached the deterioration level of 2.5 increased with the performance classification. Multiple comparisons performed to confirm these observations revealed significant differences among the service lives of the stakes treated with NCU-E at different performance classifications (Table 4). The survival probability of the NCU-E stakes, especially those of the K3 and K4 performance classes, decreased drastically in the late stages of the Kaplan–Meier curves; this downward trend in NCU-K3 and NCU-K4 was unlike that observed in the stakes treated with AAC-1 (Fig. 1).
Kaplan–Meier curves for stakes impregnated with NCU-E at different performance classifications. Information on the stakes is shown in Table 1. NCU-E emulsified copper naphthenate
Table 4 Adjusted p value between each stake group treated with NCU-E
Figure 3 shows the effects of copper and zinc on the service life of the stakes. The survival probability of NCU-K3 is displayed by a higher line than that of VZN-K3 after 5 years of exposure, which indicates that NCU-K3 is more durable than VZN-K3, although both stake groups were assigned the same performance classification. Additionally, NCU-K2 was more durable than VZN-K2 or NZN-K2. The difference in durability between NCU-E and the other emulsified preservatives was confirmed by multiple comparisons (Table 5). These results reveal that NCU-E is the most reliable of the three emulsified formulations for ground contact conditions, in good agreement with Woodward et al. [23], who found that copper naphthenate provided greater protection than zinc naphthenate at similar retention levels in a stake test in Mississippi.
Kaplan–Meier curves for stakes impregnated with emulsified preservative. Left: stakes impregnated with preservatives at K3 performance classification. Right: stakes impregnated with preservatives at K2 performance classification. Information on the stakes in each performance classification is shown in Table 1
Table 5 Adjusted p value between each stake group treated with emulsified preservatives
Comparison of ACQ-1-impregnated stakes at different retention levels
The effect of ACQ-1 retention on stake durability is shown in Fig. 4. The test was performed at the Nara site and the performance classification of all stakes was set to K3 and the retention levels of these stakes varied from 2.8 to 4.5 kg/m3. As shown in the figure, an increase in ACQ-1 retention appeared to increase stake durability, which was confirmed by multiple comparison analysis (Table 6). Contrary to AAC-1-impregnated stakes (Fig. 1; Table 3), stakes impregnated with ACQ-1 showed a significant increase in durability with an increase in ACQ-1 retention.
Kaplan–Meier curves for stakes impregnated with ACQ-1 at different retention at the Nara site. Information on the stakes in each performance classification is shown in Table 1. ACQ-1 alkaline copper quaternary
Table 6 Adjusted p value between each stake group treated with ACQ-1
Comparison of ACQ-1-impregnated stakes of different species with different retention levels
To check whether ACQ-1 retention level correlates with the durability of the stakes impregnated with ACQ-1, a similar test was performed at the Ibaraki site. Figure 5 shows the survival probabilities of Japanese cedar and Western hemlock stakes impregnated with ACQ-1. It is worth mentioning that the length of stakes used at this site was different from that of the stakes used at the Nara site. The stakes at the Ibaraki site were 35 cm long, whereas those at the Nara site were 60 cm long. As a result, the retention at the ground level must have been higher at the Ibaraki site than at the Nara site. The soil at the Nara site was damp compared to that of the Ibaraki site because the soil types at the Nara and Ibaraki sites are Gleysol and Andosol, respectively. These factors probably affected the difference in the deterioration rates at the two test sites [13, 24].
Kaplan–Meier curves for stakes impregnated with ACQ-1 at the Ibaraki site. Left: Japanese cedar stakes. Right: western hemlock stakes. Information on the stakes in each performance classification is shown in Table 1. ACQ-1 alkaline copper quaternary
As shown in the figure, the results obtained from the Ibaraki site are unclear because the exposure period is too short to estimate stake durability, especially at the high ACQ-1 retention level. The test for significance also suggests that no significant differences were observed for stake durability at high ACQ-1 retention (Table 7). A further exposure period is necessary for the precise estimation of the durability of stakes treated with ACQ-1 of performance classes K3 and K4. Contrastingly, the stakes of performance class K2 showed significantly lower durability than stakes in the K3 and K4 performance classes.
Table 7 Adjusted p value between stakes treated with ACQ-1 in different performance classifications
The efficacy of ACQ-1 impregnation was also compared between the two species (Fig. 5). The Kaplan–Meier plots of the Japanese cedar and Western hemlock stakes impregnated with ACQ-1 at three retention levels showed similar curves. The p values of the Peto & Peto modification of the Gehan–Breslow–Wilcoxon test for the K2, K3, and K4 stakes were 0.69, 0.89, and 0.26, respectively, which suggests that there is no significant difference between the durability of the two species impregnated with ACQ-1. In other words, ACQ-1 adds similar durability to both Japanese cedar and Western hemlock stakes.
We applied a survival analysis to the conventional stake test to compare the efficacy of preservatives.
The Kaplan–Meier curve was useful to estimate the efficacy of the preservative at different retention levels.
The durability of treated stakes at different retentions could be compared using the modified Gehan–Breslow–Wilcoxon test with Holm's p adjusting method with scientific robustness.
The test for significance revealed that:
The durability of stakes impregnated with AAC-1 increased with the increase in AAC-1 retention from K2 to K3; however, the difference was not significant from K3 to K4.
NCU-E was a more reliable preservative than the other emulsified preservatives containing zinc. The durability of stakes impregnated with NCU-E increased with NCU-E retention.
ACQ-1 retention level significantly affected the durability of the ACQ-1-impregnated stakes at the Nara test site. This correlation was only partly observed at the Ibaraki site, and a longer exposure period is necessary to compare the durability of stakes impregnated with ACQ-1 to the K3 and K4 classes there.
Our present data suggest that ACQ-1 has similar efficacies in both Japanese cedar and western hemlock.
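The Holm adjustment used for the pairwise retention comparisons above (the adjusted p values reported in Table 7) scales the sorted raw p values and enforces monotonicity. A minimal sketch mirroring R's p.adjust(method = "holm") — the raw p values in the usage example are purely illustrative, not values from the study:

```python
def holm_adjust(p_values):
    """Holm's step-down multiplicity adjustment.

    Returns adjusted p values in the original order, equivalent to
    R's p.adjust(p, method = "holm") as used alongside the
    pairwise Gehan-Breslow-Wilcoxon comparisons.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        # multiply the k-th smallest p value by (m - k + 1), then carry the
        # running maximum forward so that adjusted values stay monotone
        running_max = max(running_max, (m - rank) * p_values[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted
```

For three pairwise comparisons with raw p values 0.010, 0.040, and 0.030, the Holm-adjusted values are 0.030, 0.060, and 0.060: the smallest is tripled, and the monotonicity step pulls the largest raw value up to the adjusted value of its predecessor.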
The data sets analyzed during the current study are available from the corresponding author upon reasonable request.
CCA: Chromated copper arsenate
AAC-1: Alkylammonium chloride
NCU-E: Emulsified copper naphthenate
NZN-E: Emulsified zinc naphthenate
VZN-E: Emulsified zinc versaticate
ACQ-1: Alkaline copper quaternary
JIS: Japanese Industry Standards
JAS: Japanese Agricultural Standards
We are grateful to the staff of the Forestry and Forest Products Research Institute (Ibaraki site) and Nara Forest Research Institute (Nara site) for their help in managing the test sites.
Kansai Research Center, Forestry and Forest Products Research Institute, Fushimi, Kyoto, 612-0855, Japan
Ikuo Momohara
Nara Forest Research Institute, Takatori, Takaichi-gun, Nara, 635-0133, Japan
Haruko Sakai
Koshii Preserving Co., Ltd., Suminoeku, Osaka, 559-0026, Japan
Yuji Kubo
All three authors contributed equally to this manuscript. HS was responsible for the stake test at the Nara test site. YK was responsible for the stake test of the ACQ-impregnated stakes at the Ibaraki test site. IM contributed to the survival analysis of the data from HS and YK. IM mainly wrote the manuscript and the other authors revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Ikuo Momohara.
Momohara, I., Sakai, H. & Kubo, Y. Comparison of durability of treated wood using stake tests and survival analysis. J Wood Sci 67, 63 (2021). https://doi.org/10.1186/s10086-021-01996-2
Stake test
Field test
Wood preservative | CommonCrawl |
Works by Kazuyuki Tanaka
Δ03-Determinacy, Comprehension and Induction. MedYahya Ould MedSalem & Kazuyuki Tanaka - 2007 - Journal of Symbolic Logic 72 (2):452-462.
We show that each of Δ13-CA0 + Σ13-IND and Π12-CA0 + Π13-TI proves Δ03-Det and that neither Σ13-IND nor Π13-TI can be dropped. We also show that neither Δ13-CA0 + Σ1∞-IND nor Π12-CA0 + Π1∞-TI proves Σ03-Det. Moreover, we prove that none of Δ12-CA0, Σ13-IND and Π12-TI is provable in Δ11-Det0 = ACA0 + Δ11-Det.
Weak Axioms of Determinacy and Subsystems of Analysis II. Kazuyuki Tanaka - 1991 - Annals of Pure and Applied Logic 52 (1-2):181-193.
In [10], we have shown that the statement that all Σ11 partitions are Ramsey is deducible over ATR0 from the axiom of Σ11 monotone inductive definition, but the reversal needs Π11-CA0 rather than ATR0. By contrast, we show in this paper that the statement that all Σ02 games are determinate is also deducible over ATR0 from the axiom of Σ11 monotone inductive definition, but the reversal is provable even in ACA0. These results illuminate the substantial differences among lightface theorems which cannot be observed in boldface.
Fixed Point Theory in Weak Second-Order Arithmetic. Naoki Shioji & Kazuyuki Tanaka - 1990 - Annals of Pure and Applied Logic 47 (2):167-188.
Weak Axioms of Determinacy and Subsystems of Analysis I: Δ02 Games. Kazuyuki Tanaka - 1990 - Zeitschrift für mathematische Logik und Grundlagen der Mathematik 36 (6):481-491.
On Formalization of Model-Theoretic Proofs of Gödel's Theorems. Makoto Kikuchi & Kazuyuki Tanaka - 1994 - Notre Dame Journal of Formal Logic 35 (3):403-412.
Within a weak subsystem of second-order arithmetic , that is -conservative over , we reformulate Kreisel's proof of the Second Incompleteness Theorem and Boolos' proof of the First Incompleteness Theorem.
Non-Standard Analysis in WKL0. Kazuyuki Tanaka - 1997 - Mathematical Logic Quarterly 43 (3):396-400.
Within a weak subsystem of second-order arithmetic WKL0, we develop basic part of non-standard analysis up to the Peano existence theorem.
Some Conservation Results on Weak König's Lemma. Stephen G. Simpson, Kazuyuki Tanaka & Takeshi Yamazaki - 2002 - Annals of Pure and Applied Logic 118 (1-2):87-114.
By RCA0, we denote the system of second-order arithmetic based on recursive comprehension axioms and Σ01 induction. WKL0 is defined to be RCA0 plus weak König's lemma: every infinite tree of sequences of 0's and 1's has an infinite path. In this paper, we first show that for any countable model M of RCA0, there exists a countable model M′ of WKL0 whose first-order part is the same as that of M, and whose second-order part consists of the M-recursive sets and sets not in the second-order part of M. By combining this fact with a certain forcing argument over universal trees, we obtain the following result: if WKL0 proves ∀X∃!Y φ with φ arithmetical, so does RCA0. We also discuss several improvements of this result.
The Galvin-Prikry Theorem and Set Existence Axioms. Kazuyuki Tanaka - 1989 - Annals of Pure and Applied Logic 42 (1):81-104.
A Non-Standard Construction of Haar Measure and Weak König's Lemma. Kazuyuki Tanaka & Takeshi Yamazaki - 2000 - Journal of Symbolic Logic 65 (1):173-186.
In this paper, we show within RCA0 that weak König's lemma is necessary and sufficient to prove that any (separable) compact group has a Haar measure. Within WKL0, a Haar measure is constructed by a non-standard method based on the fact that every countable non-standard model of WKL0 has a proper initial part isomorphic to itself [10].
The Strong Soundness Theorem for Real Closed Fields and Hilbert's Nullstellensatz in Second Order Arithmetic. Nobuyuki Sakamoto & Kazuyuki Tanaka - 2004 - Archive for Mathematical Logic 43 (3):337-349.
By RCA0, we denote a subsystem of second order arithmetic based on Δ01 comprehension and Δ01 induction. We show within this system that the real number system R satisfies all the theorems (possibly with non-standard length) of the theory of real closed fields under an appropriate truth definition. This enables us to develop linear algebra and polynomial ring theory over real and complex numbers, so that we particularly obtain Hilbert's Nullstellensatz in RCA0.
Infinite Games in the Cantor Space and Subsystems of Second Order Arithmetic. Takako Nemoto, MedYahya Ould MedSalem & Kazuyuki Tanaka - 2007 - Mathematical Logic Quarterly 53 (3):226-236.
In this paper we study the determinacy strength of infinite games in the Cantor space and compare them with their counterparts in the Baire space. We show the following theorems:
1. RCA0 ⊢ equation image-Det* ↔ equation image-Det* ↔ WKL0.
2. RCA0 ⊢ 2-Det* ↔ ACA0.
3. RCA0 ⊢ equation image-Det* ↔ equation image-Det* ↔ equation image-Det ↔ equation image-Det ↔ ATR0.
4. For 1 < k < ω, RCA0 ⊢ k-Det* ↔ k−1-Det.
5. RCA0 ⊢ equation image-Det* ↔ equation image-Det.
Here, Det* stands for the determinacy of infinite games in the Cantor space, and k is the collection of formulas built from equation image formulas by applying the difference operator k − 1 times.
A Game-Theoretic Proof of Analytic Ramsey Theorem. Kazuyuki Tanaka - 1992 - Mathematical Logic Quarterly 38 (1):301-304.
We give a simple game-theoretic proof of Silver's theorem that every analytic set is Ramsey. A set P of subsets of ω is called Ramsey if there exists an infinite set H such that either all infinite subsets of H are in P or all out of P. Our proof clarifies a strong connection between the Ramsey property of partitions and the determinacy of infinite games.
Statistical Analysis of the Expectation-Maximization Algorithm with Loopy Belief Propagation in Bayesian Image Modeling. Shun Kataoka, Muneki Yasuda, Kazuyuki Tanaka & D. M. Titterington - 2012 - Philosophical Magazine 92 (1-3):50-63.
Maximum Marginal Likelihood Estimation and Constrained Optimization in Image Restoration. Kazuyuki Tanaka - 2001 - Transactions of the Japanese Society for Artificial Intelligence 16:246-258.
TAP Equation for Non-Negative Boltzmann Machine. Muneki Yasuda & Kazuyuki Tanaka - 2012 - Philosophical Magazine 92 (1-3):192-209.
Infinite Games and Transfinite Recursion of Multiple Inductive Definitions. Keisuke Yoshii & Kazuyuki Tanaka - 2012 - In S. Barry Cooper (ed.), How the World Computes. pp. 374-383.
Connectivity knowledge and the degree of structural formalization: a contribution to a contingency theory of organizational capability
Rogerio S. Victer, ORCID: orcid.org/0000-0002-9183-3811
The objective of this study is to develop a contingency theory of organizational capability based on the identification of decision variables relevant to the design of firms. The paper supports a model in which superior performance is the result of the proper fit between applied knowledge and organizational structure. More specifically, the study shows that the degree of structural formalization adopted by an organization reflects how knowledge controls the flow of action. The study identifies a functionally distinctive type of knowledge used to regulate the temporal order of tasks called connectivity knowledge. The influence of connectivity knowledge on the degree of organizational formalization is empirically tested on data collected in the healthcare sector. Applying a longitudinal logistic regression model on a dataset of 105 hospitals located in New York and New Jersey, this paper measures and compares the odds of key therapeutic tasks being provided by formalized hospital arrangements in which physicians work as employees instead of as autonomous professionals. Empirical results provide preliminary support to the core hypothesis correlating the volume of connectivity knowledge applied in therapeutic services to the degree of structural formalization adopted by a hospital.
Strategic management has experienced considerable change in recent decades (Mahoney and McGahan 2007; Nerur et al. 2008). Empirical evidence showing that firms are capable of sustaining advantage in competitive markets has produced various theories focused on firm-specific sources of superior performance (Ramos-Rodrigues and Ruiz-Navarro 2004). Long preoccupied with competitive analysis, strategists are becoming increasingly interested in the role played by organizational capability in generating performance heterogeneity. Organizational capability broadly refers to the ability of firms to coordinate value-adding jobs (Dosi et al. 2000). From this perspective, successful firms are not only those jockeying for market positions but also those capable of applying idiosyncratic expertise that is difficult to transfer across organizational boundaries (Zander and Kogut 1995). A capability theory applied to the strategic management field seeks primarily to explain how organizations outperform each other by adopting more efficient and effective value-creating activities (Cockburn et al. 2000).
The contingency perspective is useful in refining the capability theory of the firm (CTF). A contingency model of organizational capability is particularly useful in identifying decision variables pertinent to the proper design of firms (Burton and Obel 2004). The recurrent characteristic of the contingency approach to organizational theory is to reject one best way to organize firms and suggest different alternatives according to the circumstances. The theory advocates that organizational effectiveness results from fitting characteristics of the organization to key selected factors related to particular challenges faced by the organization (Donaldson 2001). The contingency perspective of the firm has been reinvigorated in the field of strategic management by the work of Birkinshaw et al. (2002), which emphasizes the relationship between knowledge characteristics and organizational structure within the context of multinational operations interested in diffusing practices. Nickerson and Zenger (2004) also emphasized the role played by knowledge formation in the selection of organizational design based on the characteristic of search for solutions. More recently, Burton et al. (2015) developed a multi-contingency model in which information and knowledge are core contingency factors influencing the design of organizations. The purpose of the present study is to extend this line of inquiry to new arenas of conceptual and empirical development.
In contrast to previous works on the knowledge-based contingency theory, the focus here will not be on searching for solutions or diffusion of practices but on the application of knowledge. All of these other perspectives on knowledge are useful in examining the effectiveness of organizations and constitute subsections of a broader management field of investigation (Conner and Prahalad 1996). Nevertheless, they do not cover all the issues pertinent to the economics of knowledge management. Knowledge creation facilitates organizational flexibility and adaptation (Nonaka and Takeuchi 1995), whereas knowledge transfer facilitates organization growth and expansion (Zander and Kogut 1995). We also need to fully understand the process of knowledge application, which is not as trivial a process as usually assumed in strategic management. Organizations skilled in using knowledge for diverse economic reasons—either in manufacturing or service industries—still may struggle to adapt already existing knowledge to new problems. Knowledge, as any other resource, needs to be adequately processed and organized in order to generate valuable outcomes. Simply creating and diffusing knowledge is not sufficient for generating value to end consumers (Becerra-Fernandez and Sabherwal 2001; Pertusa-Ortega et al. 2010).
The main argument of the paper is as follows: One of the most relevant contingency factors to consider in the design of organizations is the knowledge required to structure productive tasks. Knowledge is a special resource for problem-solving activities and it performs a strategic role in allowing firms to create valuable products or services according to the industry of operations. Knowledge is applied for specific purposes in conducting complex and uncertain jobs. The volume of knowledge required to solve difficult problems generates recurrent organizational challenges regarding how to best develop and deploy cognitive capabilities. Choosing the ideal structural form for a firm is a strategic decision because it deals with the challenge of selecting how to apply knowledge to create value (Mintzberg 1979; Lam 2000). For instance, firms have diverse options for allocating cognitive capabilities according to how jobs are designed and implemented. This is not only a technical problem but also an important managerial one given that knowledge application processes also involve decisions about how authority is distributed across job positions, how responsibility is allocated within performing teams, how rules of conduct are established, how tasks are coordinated in time, and how performance is monitored and rewarded (Burton and Obel 2004). We claim that superior organizational capabilities are the result of the proper fit between applied knowledge and organizational structure.
In the next section, we discuss the contributions and shortcomings of the received capability theory of the firm (CTF) and suggest how it can potentially benefit from a knowledge-based contingency perspective. Then we position the current research project within the tradition of the contingency theory of organizations and promote a model in which a special type of knowledge (i.e., connectivity knowledge) informs the organizational design. In subsequent sections, we fully conceptualize the model and test it empirically based on data collected in the healthcare industry. Empirical results provide preliminary support for the core hypothesis correlating the volume of connectivity knowledge applied in therapeutic services to the degree of structural formalization adopted by a hospital. The final section of the paper is dedicated to discussing results and suggesting future developments in the research project.
The contingency approach applied to CTF
The objective of the current paper is to advance rather than to replace the existing capability theory of the firm (CTF). CTF was born from the idea that firms play a functional role in the economy. Firms are capable of performing sophisticated tasks such as building automobiles or computers, or flying us from one continent to another (Dosi et al. 2000). Firms conduct a variety of transformative processes that provide valuable goods and services to society (Nelson and Winter 1982). They are capable of absorbing knowledge and rearranging it in the form of different goods and services. Knowledge is specifically relevant in sustaining organizational capability due to its role in conditioning how other resources are applied (Winter 1998). Knowledge is a key input factor given that it enables the firm to transform other inputs into valuable outputs (Arrow and Hahn 1971; Nelson and Winter 1982). Superior firms are those capable of adapting their organizational structure to the type of knowledge required to perform value-adding tasks. In our model, the knowledge content is less relevant than how pieces of knowledge are configured and interconnected with each other as they are applied to problem-solving tasks.
CTF emphasizes the corporate function of knowledge management. It relies on the assumption that managerial processes are essential for improving organizational performance in the face of increasingly difficult problems. Its main arguments can be summarized as follows: Knowledge increases organizational effectiveness when it is less dependent on individuals and supported by routines (Winter 2003). From the perspective of CTF, tight vertical integration is usually the preferred method for tackling the demands of complex activities and uncertain outcomes. Knowledge management benefits from the application of protocols and shared language that facilitate coordination (Grant 1996; Kogut and Zander 1992; Monteverde 1995; Moran and Ghoshal 1996). Essentially, this model points out that hierarchical systems are better for knowledge management than spontaneous networks of individual workers (Nickerson and Zenger 2004). From the perspective of traditional CTF, the centralized organization is usually preferable to the decentralized one, given that the latter tends to be inefficient in tasks requiring the creation, application, and/or transfer of sophisticated knowledge. Reducing internal costs generated by coordinating knowledge across units tends to make centralized structures more prone to generate knowledge with larger and broader impact (Argyres and Silverman 2004).
While CTF provides a useful generalization, here we highlight the need to moderate its core tenets in order to minimize the limitations and constraints of a monolithic theory of organizational structure. There are situations in which the benefits of the centralized structure are minimized or become too costly to justify long-term adoption. Acknowledging that certain structural solutions might not be appropriate for all types of knowledge management is vital to envisioning alternative approaches to efficient and effective problem solving. Ultimately, being able to customize and adapt the resource base to needs is what allows a firm to sustain its competitive advantage over time (Teece 1996). In addition, the ability to manage resources in a deliberate manner assists in balancing the costs of a capability and its value (Winter 2003).
We suggest a slightly (but fundamentally) different theoretic perspective on how to apply knowledge through alternative organizational configurations. The key is to identify core parameters that make either one or the other form of governance mode more conducive to effectiveness. It is true that decentralized organizations have difficulty in dealing with more complex coordination, but they also have relevant strengths in knowledge management. Decentralized organizations are superior to centralized ones in ways that might be strategic on certain occasions. They require fewer administrative expenses, allow higher levels of customization, and are more prone to organic adaptation, which make them potentially more efficient and even more effective than centralized organizations in performing certain tasks (James 2003). They are also better equipped to promote psychological empowerment (Mathieu et al. 2006) and foster positive emotional outcomes (Ryan and Deci 2000), which are essential components of a healthy and sustainable organizational climate. There is also evidence that decentralized structures encourage a more proximate search for knowledge, which promotes the development of in-depth capabilities (Argyres and Silverman 2004).
A contingency perspective of the organizational capability informs us that there is no best way to organize the application of knowledge, but many possible ones. For this reason, it is our purpose to convert CTF into a decision model (Burton et al. 2015). This means that it should not advocate either one or other organizational form as the best, but identify which one is the most appropriate according to the demands of the task. The features of the cognitive task are particularly relevant to our modeling purposes. We propose that the choice of the best organizational structure depends on the ability to manage increasing volumes of knowledge required to solve a difficult problem. With volume, there is an increasing need to combine, integrate, and amalgamate different parts of the relevant knowledge body, ultimately affecting how knowledge is organized for productive purposes. We believe that adopting a contingency perspective is useful to relativize the need for "more" centralization even in face of the evidence that more centralization usually is accompanied with a greater capability of knowledge application through the deployment of coordinating tasks. The contingency perspective highlights the need to exercise judgment in choosing the best organizational arrangement. More structured organizations generate benefits in knowledge management, but they also face additional sources of inefficiencies, expenses, and risks. Trade-off analysis is essential to any contingency approach.
One important step in overcoming this inherent shortcoming in the received CTF theory is to replace the core variable describing the degree of structuration of an organization. Instead of centralization, we should promote the notion of formalization as the main feature describing organizational architecture. Centralization refers to the location of decision-making rights, whereas formalization refers to the codification of decision-making processes (Burton et al. 2015). In many ways, these two organizational variables get confused because centralization tends to occur through formalized procedures that often standardize or reduce the discretion of decision-making at the level of the task (Pertusa-Ortega et al. 2010). However, this connection is not necessarily true in all conditions (Kim et al. 2003). Decision power is relevant to formalizational issues, but power can be distributed in multiple forms, meaning that it can be implemented in either centralized or decentralized ways (Nickerson and Zenger 2004). A good example of this phenomenon is the multidivisional structure that makes the firm more formalized, but also more decentralized at the same time (Chandler 1962). For instance, structural stability and flexibility can be combined in a reward system that favors cooperation across unit boundaries (Gold et al. 2001). Consequently, we suggest changing the emphasis on how organizational structure is depicted for the sake of knowledge application. Instead of focusing on a continuum of centralization, we prefer to deal with a continuum of formalization.
Formalization of the organization deals with issues related to well-defined jobs as well as the adoption of regulations, decision-making rules, and policy implementation. While informal organizations are usually decentralized (because decision rules derive from individual skills), more formalized organizations can be either centralized or decentralized without affecting their fundamental character. Decision rights might be allocated either up or down along the hierarchy of jobs depending on the degree of complexity of the task. Invariably, in industries in which tasks are highly complex, organizations are compelled to decentralize the decision-making process to individuals closest to the action. The decentralization of decision rights should not necessarily affect the process of establishing the nature of relationships and responsibilities within the firm. In complex industries, for instance, hierarchical organizations might be also decentralized, as in the case of some professional organizations centered on highly intellectualized workers capable of operating autonomously based on their levels of education and experience. The focus of formalization is not the relationship between superior and subordinate, but the relationship between worker and job, which also affects the boundaries of the firm. If decision rules are firm-specific, then there is a need for more control. More formalized organizations require the internalization of tasks, while less formalized organizations permit the externalization of tasks (i.e., the professional conducting the action does not need to be a legal member of the organization). This difference also distinguishes our approach from that of traditional organizational economics in the sense that we are not concerned with the cost of transactions but with the cost of cognitive efforts. 
In a formal but decentralized organization based on professional work, decisions are not necessarily determined from above, but shaped through the adoption of best practices and guiding policies.
Knowledge as a contingency factor
The main goal of the contingency approach to organizational theory is to tailor the structure of the organization to external sources of uncertainty and complexity (Perrow 1967; Thompson 1967; Lawrence and Lorsch 1967). Organizations are open systems, vulnerable to environmental contexts. Different conditions lead to the selection of different organizational designs. The core teaching of contingency theory is that organizations vary in their abilities both to process information about the environment and to coordinate internal activities required for survival.
Among the traditional contingency factors, technology (understood broadly as applied knowledge) plays a promising role because it directly affects the conduct of tasks required to solve valuable productive problems (Donaldson 2001). The logical argument underlying the connection between technology features and organizational structure is that the right match increases the ability of firms to conduct transformation processes compatible with the nature of production. One of the most important theoretical approaches linking technology to structure was originally suggested by Joan Woodward (1980), who claimed that it was possible to develop generalizations about the formal composition of a company based on its fit with different technologies. She essentially argued that different technologies directly determine certain aspects of the organizational structure, such as span of control, centralization of authority, and the formalization of rules and procedures. Perrow (1967) also emphasized the central place occupied by technology in the transformation process, affirming that the type of technology used by the organization determines the most effective structure for successful performance. Other contingency-oriented authors such as Thompson (1967) and Galbraith (1973) also emphasized the role of organizations in controlling increasing degrees of complexity through different types of technology.
Taken together, these studies on technology as a contingency factor demonstrate that certain organizational structures are more appropriate for dealing with uncertainty and complexity than others (Tushman and Nadler 1978; Keller 1994; Larkey and Sproull 1984). This connection has been corroborated by additional research focused on the need to provide information-processing capabilities to decision makers so that tasks can be performed accordingly based on the underlying objectives (Daft and Lengel 1986; Habib and Victor 1991; Rogers and Bamford 2002; Wolf and Egelhoff 2002). As the amount of uncertainty and complexity increases, so too does the imperative for increased information-processing capacity (Burton and Obel 2004). When an objective calls for interdependent activities, for instance, the need to communicate creates difficulties across tasks performed by separate individuals. The organization needs to rely on specific communication protocols between operational units. The ability to deal with input uncertainty and complexity requires activities such as collecting appropriate information, applying information in a timely fashion, transmitting information without distortion, and managing high volumes of information.
The contingency method of dealing with uncertainty and complexity by tailoring organizational structure to features of the information-processing technology is useful but inevitably incomplete. Organizations operating in complex industries require more than just communication functions; they also require computational operations based on previously accumulated knowledge. The ability to process information faster and more accurately, for instance, cannot resolve complex problems unless it is guided by knowledge either acquired from outside or created within the firm (Montibeller et al. 2006). While information indicates what something means, knowledge addresses how to do something (Zander and Kogut 1996). This distinction is highly relevant for the sake of making architectural decisions about the organizational form.
Technology applied to knowledge application is less mechanical or physical than other forms of information technology, at least at this time in history. A broad notion of technology encompasses computer hardware and software, but it is less about the computer itself and more about the knowledge underlying the computational activity. This conceptual definition of technology allows more flexibility in recognizing the various ways technology matters. Computation occurs within information technologies, but it also occurs within the minds of individuals or even as the combination of cognitive and behavioral activities conducted by a team (Forrester 2000; Hutchins 1996; Power and Waddell 2004). These alternative mechanisms of computation based on cognitive skills supplied by individual members of the organization deal with more complex socio-technical arrangements that combine human and machine competencies into a more comprehensive technological apparatus. This dimension of operations based on cognitive functions conducted by teams can profoundly influence the selection of the best organizational design.
The ways in which pieces of applied knowledge possessed by different individuals interact and complement each other have an important impact on organizations. If for every task there is a corresponding knowledge set underlying it, these tasks, along with their knowledge sets, can be combined to conduct increasingly sophisticated behaviors. The underlying knowledge guiding tasks is specific to each particular problem, but we can try to create a science of applied knowledge if the theory focuses exclusively on the structural configuration of knowledge. From a logical perspective, pieces of knowledge are connected to each other temporally within a job: they are conducted either concurrently or sequentially in reference to each other (Thompson 1967; Marks et al. 2010; Burton et al. 2015).
Ordering of tasks within or across jobs has an inherent dimension of temporality that is relevant to the organization given that some of those tasks have to perform coordinating functions expressed through time-based regulations. For this reason, we infer that the temporal connectivity of tasks is enabled by a distinct kind of knowledge with a special functional role in value creation. We call the knowledge promoting standard tasks content knowledge, whereas the knowledge promoting regulatory tasks (or meta-tasks) we call connectivity knowledge. The structure of the firm is essentially the reflection of demands generated by the need to apply the latter type of knowledge to the process of configuring a string of interconnected tasks.
The application of knowledge requires a method of execution that is also dependent on knowledge, meaning that content knowledge depends on an additional type of knowledge in order to be properly used in practice. These additional pieces of knowledge correspond to technologies (or techniques) that allow already existing knowledge to be adapted to particular uses. This specific type of knowledge plays the function of connecting existing pieces of knowledge into a coherent temporal flow of action. Another way of putting it is to recognize that time has a relevant role in organizing the use of knowledge for productive processes. Knowledge effectiveness depends on crafting a logical sequence of cognitive steps required to solve a problem or accomplish an objective. For this reason, the application of knowledge through specific cognitive mechanisms depends on an appropriately configured organizational structure that governs the "temporal ordering of tasks."
The distinction between different functions of knowledge justifies the adoption of a terminology that makes a clear distinction between content knowledge dedicated to tasks and connectivity knowledge dedicated to meta-tasks (or how tasks are temporally connected to other tasks in the same job or across jobs). In order to work as a productive resource, knowledge applied to tasks needs to be adequately adapted to the particular objectives of the job to be done. By tailoring previously acquired knowledge to the act of selecting appropriate behaviors according to the nature of problems, the decision maker (as an individual or team) engages in an active and dynamic intellectual effort. The ability to apply the same knowledge to a variety of problems necessitates a complex and highly adaptive cognitive process that combines prior accumulated knowledge together with new collected information about the problem at hand (Sweller 1988).
Most of the cognitive mechanisms employed today to solve complex problems are still under the full control of individuals, in the form of psychological processes and internalized human capital. However, the exercise of individual cognitive capabilities is guided by the organizational context in which the intellectual job is performed. As task complexity increases, the need to apply connectivity knowledge also increases. Up to a certain point, the selective allocation of attention to different types of knowledge can be done by individual agents separately from each other. However, as the amount of knowledge increases beyond a certain threshold, there is a need for a more formal division of cognitive labor. Increasing volumes of connectivity knowledge mean that the organization has to make sure that the necessary cognitive capability is in place. One of the consequences of this institutionalization of cognitive functions is the adoption of increasing degrees of structural formalization responsible for developing and deploying relevant cognitive capabilities.
The economics of knowledge application
Figure 1 below displays the model of organizational capability in a simplified manner in which knowledge is a resource of production (Postrel 2002). It represents the nature of a resource application process based on input-process-output (IPO) episodes composed of three components: (a) the core input from the task environment (represented by knowledge), (b) the transformation process (represented by the organizational structure), and (c) performance outcomes (represented by some measurement of organizational effectiveness). Knowledge application triggers the choice of the best organizational structure, which in turn serves to coordinate cognitive activities that generate performance outcomes. Organizational structure here represents an activity pattern (or a recursive cycle of how processes are conducted in time) that generates the performance space for a particular firm. Organizational capability is the emergent result of the dynamic fit between these IPO elements.Footnote 1
The contingent model of organizational capability (restricted model)
The model defines a problem from the perspective of the content knowledge that is required to solve it. Problem difficulty is an important dimension, and it is defined by the amount of knowledge required to resolve the problem in a given historical context (e.g., based on the stage of scientific development). Increasing degrees of difficulty generate the need for increasing quantities of knowledge and, consequently, an increased demand for knowledge aggregation and synthesis. As tasks and sub-tasks become increasingly interdependent, there is a corresponding increase in knowledge dedicated to the interconnectivity of tasks. Knowledge of task integration can be measured through increasing levels of activity dependency, as originally proposed by Thompson (1967). However, here we are more interested in describing interdependencies from a cognitive perspective than from a mechanical one, which makes this theory more applicable to post-industrial firms dedicated to professional services than to traditional industrial ones dedicated to product manufacturing. In order to combine different pieces of knowledge, a kind of knowledge that goes beyond content knowledge is needed. As previously proposed, we identify it as connectivity knowledge. This type of knowledge is exclusively dedicated to the regulatory and/or coordinating functions required in the application of knowledge as a resource.
From a logical perspective, connectivity knowledge can be differentiated according to how it orders tasks (i.e., how tasks are connected to each other). Internal connections consider how sub-tasks are related to each other, whereas external connections consider how tasks (and their sub-tasks) are related to other, different tasks. The model relies on the ability to measure the degree of temporal interdependency of intellectual tasks required to accomplish an objective within recursive cycles or episodes of knowledge application. The greater the degree of knowledge integration required for effectiveness and efficiency, the stronger the need to coordinate the overall process of knowledge application through a well-designed sequence of cooperative work. As the number of task components increases, the probability of success of a string of tasks organized sequentially or concurrently in time diminishes with every added component. The role of the organizational structure is to assist in this "assembly" process through the economic allocation of cognitive effort.
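The claim that success becomes less likely with each added component can be illustrated with a minimal sketch. It assumes, purely for illustration (this assumption is not in the source), that components succeed independently with a common probability:

```python
# Illustrative sketch: if each task component succeeds independently with
# probability p, a string of r components succeeds with probability p**r,
# which shrinks with every added component. The independence assumption
# and the value p = 0.95 are hypothetical.

def joint_success(p: float, r: int) -> float:
    """Probability that all r independently executed components succeed."""
    return p ** r

for r in (1, 2, 5, 10):
    print(r, round(joint_success(0.95, r), 3))
```

The coordinating role attributed to organizational structure can be read as an attempt to counteract exactly this decay as task strings grow longer.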
The formula below suggests an arithmetic of knowledge application, as follows:
$$ {P}^{\mathrm{r}}={\sum}_{n=1}^{\mathrm{r}}{\kappa}_{\mathrm{n}}+{\sum}_{m=1}^{\mathrm{s}}{\phi}_{\mathrm{m}} $$
where P refers to a problem with degree of complexity r, K is content knowledge with n distinct nodes up to r, and Φ is connectivity knowledge with m ties between pieces of content knowledge up to s, so that Φ is contained in K × K and s is bounded by r(r-1)/2. This model can also be represented as a graph P = (K, Φ) that consists of a finite set K of vertices representing content knowledge and a finite set of pairs of vertices Φ = {(κ1, κ2) | κ1, κ2 ∈ K} representing connectivity knowledge. P is temporally undirected if the pair (κ1, κ2) is the same as the pair (κ2, κ1), while P is temporally directed if the pair (κ1, κ2) is different from (κ2, κ1), meaning that the temporal order of κ1 and κ2 matters for the process of knowledge application. In a mixed graph, either κ1 antecedes κ2 or they occur at the same time, although κ2 cannot antecede κ1. This is the condition for the existence of causation, which is the requirement for the functionality of knowledge as a problem-solving resource. The use of knowledge allows P to be solved through a temporal process. P has a temporal path in a graph in which a tuple of K (κ1, κ2, …, κr) generates the conditions for the effective completion of a performance episode and (κn, κn+1) is contained in Φ for 1 ≤ n ≤ r-1. The structural length of P is the number of pairs of vertices (ties) m in Φ on the temporal path, and it might involve multiple autonomous performance episodes. In non-deterministic paths, the graph might acquire multiple structural configurations.
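The graph formulation above can be sketched in a few lines of code. The node labels and tie set below are invented for illustration; only the bound s ≤ r(r-1)/2 and the temporal-path condition come from the model:

```python
# Minimal sketch of the graph P = (K, Phi). Vertices are content-knowledge
# pieces; directed ties are connectivity knowledge. Labels are hypothetical.

K = ["k1", "k2", "k3", "k4"]                       # content-knowledge vertices
Phi = [("k1", "k2"), ("k2", "k3"), ("k3", "k4")]   # directed temporal ties

r, s = len(K), len(Phi)
assert s <= r * (r - 1) // 2   # the bound on the number of ties stated above

def has_temporal_path(nodes, ties):
    """True if every consecutive pair (k_n, k_{n+1}) appears among the ties."""
    tie_set = set(ties)
    return all((nodes[i], nodes[i + 1]) in tie_set
               for i in range(len(nodes) - 1))

print(has_temporal_path(K, Phi))  # True: (k1, ..., k4) forms a temporal path
```

Removing any tie breaks the chain, which corresponds to a performance episode that cannot be completed effectively.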
An important feature of Formula 1 is that it indicates that the addition of individual pieces of (content) knowledge is not an automatic or simple procedure. It requires an additional knowledge type with the specialized function of connecting (or temporally regulating) different pieces of content knowledge being used as resources of productive tasks. Each of these two types of knowledge guides tasks in certain ways: The first section of the formula generates standard tasks, and the second section generates meta-tasks. What is relevant for organizational structure is particularly the ability to conduct the second kind of task: When meta-tasks require increasing cognitive effort relative to the amount of cognitive effort required by standard tasks, then increasing degrees of structural formalization are required for operational effectiveness. In this sense, connectivity knowledge underlies a transition function that connects (κn, κn+1) through the mapping Γ(Φ): (κn, Ii) → (κn+1, I*), in which Ii reports the results of applying κn through a task (or set of tasks), generating the conditions for applying κn+1 with an expected result I*, where the star represents an ideal outcome. A sequence of transformations of this kind requires significant amounts of organized cognitive capability, proportional to the complexity of the problem being solved.
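The transition function Γ can be read as a lookup from the current knowledge piece and the feedback from applying it to the next piece and its expected result. The rule table below is a hypothetical illustration, not a mechanism described in the source:

```python
# Sketch of the transition mapping Gamma(Phi): (k_n, I_i) -> (k_{n+1}, I*).
# The entries are invented; a real table would encode a firm's connectivity
# knowledge for a specific job.

transitions = {
    ("k1", "result_ok"): ("k2", "expected_k2"),
    ("k2", "result_ok"): ("k3", "expected_k3"),
}

def gamma(current: str, feedback: str):
    """Return (next knowledge piece, expected result), or None if no rule applies."""
    return transitions.get((current, feedback))

print(gamma("k1", "result_ok"))   # ('k2', 'expected_k2')
print(gamma("k3", "result_ok"))   # None: the sequence ends here
```

The point of the sketch is that each step consumes feedback before the next piece of content knowledge can be applied, which is why the sequence demands organized cognitive capability.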
Figure 2 below illustrates conditions in which the comparative volume of content knowledge and connectivity knowledge changes as the result of the demands for increasing temporal order. Sequential tasks refer to processes of intra-task ordering of content knowledge, while concurrent tasks refer to the process of inter-task ordering of content knowledge. Organizational formalization becomes increasingly relevant when connectivity knowledge becomes comparatively more relevant for problem solving than content knowledge, such that:
Proportion of content knowledge and connectivity knowledge, according to the degree of intra- and inter-task ordering
Proposition #1: If the volume of connectivity knowledge is comparatively higher than the volume of content knowledge, then structural formalization should be high.
Proposition #1A: A task requiring low amounts of connectivity knowledge (when Φ/K < 1) is better conducted by less formalized organizational structures.
Proposition #1B: A task requiring high amounts of connectivity knowledge (when Φ/K > 1) is better conducted by more formalized organizational structures.
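Propositions #1A and #1B amount to a threshold rule on the Φ/K ratio, which can be sketched as follows. The threshold of 1 comes from the propositions; the input volumes are invented, since the text does not specify how the volumes are measured:

```python
# Sketch of Propositions #1A/#1B: the ratio of connectivity knowledge (Phi)
# to content knowledge (K) drives the recommended degree of formalization.

def recommended_structure(phi_volume: float, k_volume: float) -> str:
    """Return the structural recommendation implied by the Phi/K ratio."""
    if k_volume <= 0:
        raise ValueError("content-knowledge volume must be positive")
    return "more formalized" if phi_volume / k_volume > 1 else "less formalized"

print(recommended_structure(3, 10))   # Phi/K = 0.3 -> less formalized
print(recommended_structure(12, 10))  # Phi/K = 1.2 -> more formalized
```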
Hypothesis formulation and empirical model
The model advocated in this paper targets firms operating in complex environments, particularly in competitive professional service industries. We have adopted the healthcare delivery sector as a preferred point of reference for model specification and empirical testing. Conducting the analysis through the lens of one exemplary industry facilitates explanation and avoids too much abstraction. In addition, knowledge plays an especially relevant role in industries that are highly dependent on the work of key professionals such as physicians, who plan and conduct treatments based on the use of existing knowledge created in scientific disciplines, such as Medicine, Biology and Pharmacology.
Figure 3 represents the proposition articulated above in graphic form and recognizes the possibility of hybrid structures. Inter-content ordering requires knowledge for concurrent connectivity, while intra-content ordering requires knowledge for sequential connectivity. When concurrent and sequential forms of connectivity are low, then a less formalized organizational structure is sufficient to process the amount of connectivity knowledge required for the completion of the job. In other words, there is no need for the organization to formalize procedures; simple common sense based on the experience and intelligence of practitioners suffices. When concurrent and sequential forms of connectivity are high, then a more formalized organizational structure is needed in order to process the amount of connectivity knowledge required for the completion of the job. In this case, the organization is required to generate clearer decision rules based on articulated policies and procedures. The organization also has to conduct activities related to monitoring and coordination. Translating these propositions to the healthcare sector, we predict a particular relationship between the volume of connectivity knowledge required by therapeutic services and the governance arrangement adopted by the hospital to manage the work of physicians, expressed in the following testable hypothesis:
Conversion of connectivity knowledge into organizational structure (with indication of stable structures)
Core hypothesis: Therapeutic services requiring the application of high volumes of connectivity knowledge will have greater odds of being provided by a more formalized hospital than by a less formalized hospital.
In order to test this hypothesis, we apply a longitudinal logistic regression model on a pilot dataset generated from two annual surveys conducted by the American Hospital Association in 2005 and 2014. The logistic regression technique is used widely in many fields, including the medical and social sciences (Freedman 2009). For example, logistic regression may be used to predict whether a patient has a given disease (e.g., diabetes; coronary heart disease), based on observed characteristics of the patient, such as age, sex, body mass index, results of various blood tests, etc. (Truett et al. 1967). In the present study, this statistical method is used to predict whether the organizational structure of a hospital is related to the connectivity knowledge required to adequately treat patients with certain medical conditions. We assume that the hospital's scope of services is an indication that it has the necessary competence to provide the service to a population of patients.
A hospital's organizational structure is conditioned by the type of contract signed with the physicians operating on its premises. These contracts regulating the relationship between hospitals and physicians vary in the degree of formalization of roles and expectations (Scott 1982). In recent decades, various models of hospital-physician relationships have evolved in the USA, and hospitals have begun adopting different organizational options (Robinson 1999; Scott et al. 2000). Ways of incorporating physicians into the hospital fall along a broad continuum ranging from loose networks to tightly coupled hierarchies. In addition to the traditional open system model (OSM), the American Hospital Association (AHA 2016) has recently identified eight distinct forms of hospital-physician contracting, including (1) Independent Practice Association (IPA), (2) Group Practice without Walls (GPWW), (3) Open Physician-Hospital Organization (OPHO), (4) Closed Physician-Hospital Organization (CPHO), (5) Management Services Organization (MSO), (6) Integrated Salary Model (ISM), (7) Equity Model (EM), and (8) Foundation Model (FM).
Currently, the most common hospital-physician relationships are governed by OSM, IPA, CPHO, and ISM. OSM and IPA together represent approximately 56% of US hospitals, ISM approximately 29%, CPHO and MSO approximately 8%, and the others approximately 4% (AHA 2016). These organizational arrangements reflect the level of risk shared by each party, the integration of operations, the degree of exclusivity, and the investment of capital. In the OSM, physicians own their practices and admit patients to one or more hospitals on whose medical staff they serve. The requirements for membership are few, as are the responsibilities (Casalino and Robinson 2003). In the IPA, physicians continue owning their practices but become formally affiliated with a hospital and are motivated to increase compliance with the hospital's management initiatives to decrease costs and increase quality. In the CPHO, physicians have some degree of independence, but are constrained by exclusivity contracts and share responsibilities for treatment outcomes through the adoption of standardized business practices, joint planning, and clinical integration (Burns et al. 2000; Cuellar and Gertler 2006). Finally, in the ISM, physicians are employed by the hospital, which purchases both physical and intangible assets as well as requires physicians to operate in centralized locations for the sake of coordination (Morrisey et al. 1996). The literature on the subject (Robinson 1999; Casalino and Robinson 2003) suggests that these variations on the hospital-physician relationship correspond to four basic forms of governance described in organizational economics: arm's length, alliance, joint venture, and hierarchy (Williamson 1985).
Similar to Conner and Prahalad (1996), we focus on basic choices and concentrate on polar organizational modes, expressing a hospital's degree of formalization as a binary contrast between network and hierarchy. This means that our empirical model classifies OSM and IPA as open networks, and CPHO and ISM as closed hierarchies,Footnote 2 which is intended to represent and contrast the two polar ends of a structural continuum.Footnote 3 As noted above, these are recurrent configurations and might represent relatively stable models of contracting between hospitals and physicians in the current historical context of the American healthcare service market. At the present stage of theory development, it is preferable to apply simplified methodological approaches and improve precision in future studies. In addition, the data provided by the American Hospital Association (AHA) annual surveys are still inadequate for a more precise specification of the current model. For these reasons, we have decided to apply binomial logistic regression instead of ordered logistic regression.
Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. However, unlike ordinary linear regression, logistic regression is used for predicting binary dependent variables rather than a continuous outcome (Hosmer and Lemeshow 2000). In the present case, independent variables are represented in categorical form, expressed in binary terms (i.e., dummy variables indicating whether a selected therapeutic service is offered or not by a hospital). At this stage of model development, in which the knowledge required to conduct particular tasks is not yet fully described in terms of its composition of content and connectivity knowledge, we take a reasonable methodological shortcut. We are essentially interested in measuring the probability that therapeutic services (ordered by the comparative magnitude of connectivity to content knowledge) will be offered by an integrated, hierarchical hospital. The ordering of therapeutic services was generated in clusters of medical fields through the comparison of therapeutic services based on their respective configurations of tasks and meta-tasks.
The probability for a certain therapeutic service to be offered by a formalized mechanism of governance varies between 0 and 1. The probability of a service occurring in a formalized hospital is p, and the probability of the same service occurring in a less formalized hospital network is q = 1 - p. Odds are defined as the ratio of the probability of success and the probability of failure, such that:
$$ {\mathrm{odds}}_{\left(\mathrm{formalization}\right)}=\mathrm{p}/\left(1-\mathrm{p}\right)\ \mathrm{or}\ \mathrm{p}/\mathrm{q} $$
The model can then be fully expressed as follows:
$$ \text{logit}(\text{p}) = \text{log}(\text{p}/(1-\text{p})) = {\beta}_{0} + {\beta}_{1} \text{T}(\Phi) + {\beta}_{2} \text{S}(\sigma) + {\beta}_{3} \text{R}(\delta) + \varepsilon $$
where p indicates the probability of a formalized hospital being selected, and β1 represents the regression coefficients associated with the selected group of services represented by the independent variable T(Φ), also expressed in binary terms (i.e., 1 if offered and 0 if not offered). The other independent variables, S(σ) and R(δ), are control measures for hospital size and the degree of rivalry in the region, respectively. They serve to isolate other potentially relevant sources of influence upon the choice of organizational structure, such as organizational complexity and competitive intensity, and can be understood as additional contingency factors. The estimated β1 coefficients are reported in log-odds and then converted to odds ratios, as explained in the empirical section below.
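To make the model concrete, the sketch below inverts Formula #3 to recover p and the odds from the linear predictor. All coefficient and covariate values are invented for illustration and are not estimates from the study:

```python
import math

# Hedged sketch of Formula #3: logit(p) = b0 + b1*T + b2*S + b3*R.
# Coefficients and covariate values below are hypothetical.

def inverse_logit(eta: float) -> float:
    """Convert a log-odds value back to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

b0, b1, b2, b3 = -0.5, 0.8, 0.3, -0.2   # invented coefficients
T, S, R = 1, 0.6, 0.4                    # service offered; scaled size, rivalry

eta = b0 + b1 * T + b2 * S + b3 * R      # the linear predictor, logit(p)
p = inverse_logit(eta)
odds = p / (1 - p)                       # Formula #2: odds of formalization
print(round(p, 3), round(odds, 3))
print(round(math.exp(b1), 3))            # exp(b1): the odds ratio for offering T
```

Note that exp(eta) equals the odds directly, which is why coefficients reported in log-odds units convert to odds ratios by exponentiation.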
Empirical testing
We tested the hypothesis in a model considering 15 therapeutic services, three for each of five core medical areas of specialization: cardiology, oncology, orthopedics, gastroenterology, and the central nervous system. Hospitals are usually compared to each other based on these traditional services (e.g., US News and World Report's hospital rankings). Each service was assessed and classified with the purpose of ranking procedures according to the estimated amount of connectivity knowledge they require for effectiveness. The purpose was simply to order services in relation to each other, which is the first step in creating a systematic arithmetic of knowledge application. Given that three services are considered for each medical area, they were labeled low, medium, and high based on the amount of connectivity knowledge they require (see Fig. 4 below).
Cluster of therapeutic services per medical field ordered according to the volume of connectivity knowledge
The assessment relied on an exploratory methodology based on the episodic theory of performance effectiveness proposed in Marks et al. (2010). Performance episodes are meaningful periods of time during which members of a team work to achieve shared goals and feedback becomes available. Episodes (or recurrent cycles of performance) depend on three superordinate team process dimensions: (a) action, (b) transition, and (c) interpersonal stages of performance. Action processes occur during performance episodes and include specific activities for the accomplishment of goals. Transition processes occur as teams cycle from one performance episode to another. During transition phases of performance, the team reflects on how it has previously functioned and develops plans for future efforts. Performance episodes also encompass the need to manage interpersonal processes that occur at any time during the team's life cycle and include managing conflicts, motivation, and affect levels (Eddy et al. 2013). The transition stage was used as a proxy for the amount of connectivity knowledge being processed and applied during the performance of a job. An increasing amount of time dedicated to transition stages is an indication of task complexity and uncertainty, which signals the application of large amounts of connectivity knowledge. It is reasonable to assume that the amount of feedback, control, revision, and coordination required by a therapeutic service reflects the amount of (temporal) order required by the tasks contained in a job. Treatments were ranked according to the sophistication of the transition processes they typically require, assuming a certain amount of standardization of practices across hospitals. A more precise specification of the IPO model with different levels of abstraction is suggested by Fig. 5 below.
The contingent model of organizational capability, decomposed in different levels of abstraction
We also limited the sample of hospitals to only those located in a restricted geographic area in order to control for the heterogeneity of external factors and minimize the effects of different degrees of competitive intensity. This means that R(δ) in Formula #3 is kept fixed. Only hospitals located in New York and New Jersey were considered in the study, based on data provided in the 2005 and 2014 National Hospital Surveys (conducted by the American Hospital Association, AHA). From the approximately 600 hospitals covered by the surveys in both states, we selected a sample of 105 hospitals with a complete and unambiguous set of data and a stable scope of services in the selected medical areas, including 57 hospitals in New York and 48 in New Jersey. Approximately 60% of them were classified as having less formalized structures (i.e., they operate primarily through a network of physicians) and 40% as having more formalized structures (i.e., they operate primarily through salaried physicians or exclusivity contracts).Footnote 4
The option to consider hospitals with a fixed scope of services (in those five selected medical fields) within a decade is intended to simplify the testing procedure and relies on the assumption that changes in core services are costly and infrequent. A hospital's scope reflects previous investment commitments and is likely to persist over time, as does any other major investment in "sticky" resources (Ghemawat 1991). This methodological approach is also justified on the grounds that the objective of the present study is to identify stable relationships. The theoretical model relies on the evolutionary assumption that only the most efficient and effective structural solutions persist in history (Nelson and Winter 1982). Dealing with this kind of sample, however, requires the adoption of special methodological procedures. Standard logistic regression models depend on independent binary outcomes, whereas here the outcomes arise from the dependency of multiple observations per subject (i.e., the same hospitals located in a geographic region over a period of a decade). In this case, the independent variables and the error terms are not independent from each other. This means that the estimated logistic coefficients for therapeutic services are not an unbiased measure of the true parameters. The estimation of the standard errors needs to deal with this deviation from the standard case and assume that, if the measurement and estimation were repeated, we would observe results in the same range as reported (Stata 2013, pp. 309–310). In addition, we also adopt a correlation structure in which observations are only related to their own past values through a first-order autoregressive (AR-1) process. For these reasons, the logistic regression coefficients are estimated based on the "sandwich" estimator of variance, a more robust technique developed independently by Huber (1967) and White Jr. (1980).Footnote 5 This procedure reflects the average dependence among the repeated observations over subjects when the data do not come from a simple random sample or when the independent variables and error terms are not independently and identically distributed (i.i.d.). The resulting estimator for the odds remains consistent given that the variance estimates are based on the weak assumption that the weighted average of the estimated correlation matrices converges to a fixed matrix (Liang and Zeger 1986; Hu et al. 1998).
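A compact way to state the idea is the textbook "sandwich" form of the Huber–White estimator (a sketch of the general construction, not the exact clustered expression Stata computes):

```latex
\widehat{\operatorname{Var}}(\hat{\beta})
  \;=\; \hat{A}^{-1}\,\hat{B}\,\hat{A}^{-1},
\qquad
\hat{A} \;=\; -\sum_{i}\frac{\partial^{2}\ell_{i}(\hat{\beta})}{\partial\beta\,\partial\beta^{\top}},
\qquad
\hat{B} \;=\; \sum_{i} u_{i}(\hat{\beta})\,u_{i}(\hat{\beta})^{\top}
```

where $\ell_i$ is the log-likelihood contribution and $u_i = \partial \ell_i/\partial \beta$ the score of unit $i$. In the clustered version, the scores are first summed within each hospital, which is what lets the estimator absorb the within-hospital dependence that the AR-1 working structure models.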
All 15 procedures are included together in the same model, controlled by hospital size (measured by the number of beds). The odds ratios for a unit change in each covariate are reported in Fig. 6 below, which estimates the coefficients β1 in Formula #3 for each of the selected procedures (ordered in blocks of three by medical field). Here, coefficients in log-odds units have already been converted to odds ratios in order to facilitate interpretation.Footnote 6 For a unit change in each covariate, the odds are predicted to change by a factor equal to the estimated coefficient. At fixed values of the other covariates, cardiac surgery, for instance, has over two times the odds of being offered by Integrated Salary Model (ISM) hospitals than by Open System Model (OSM) hospitals, whereas the odds ratios for catheter procedures and cardiac intensive care are 0.34 and 0.56, respectively. Odds ratios greater than 1 correspond to positive effects because they increase the odds. Those between 0 and 1 correspond to negative effects because they decrease the odds. Odds ratios of exactly 1 correspond to "no association." In this case, both catheter procedures and cardiac intensive care are more likely to be offered by a hospital adopting a less formalized structure. This does not mean that they are not offered by formalized hospitals, only that their likelihood is comparatively lower.
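The conversion described in Footnote 6 is simply exponentiation of the log-odds coefficient. A minimal sketch (the coefficients below are hypothetical values chosen to reproduce the odds ratios quoted above, not the study's published estimates):

```python
import math

# Hypothetical log-odds coefficients (illustrative only, chosen so that
# exponentiating them reproduces the odds ratios quoted in the text).
log_odds = {"cardiac_surgery": 0.79, "catheter": -1.08, "cardiac_icu": -0.58}

# exp() converts a logistic coefficient into an odds ratio.
odds_ratios = {name: math.exp(b) for name, b in log_odds.items()}

for name, or_ in odds_ratios.items():
    effect = "positive" if or_ > 1 else "negative" if or_ < 1 else "none"
    print(f"{name}: OR = {or_:.2f} ({effect} association)")
```

An odds ratio of exp(0.79) ≈ 2.20 means the odds roughly double; exp(−1.08) ≈ 0.34 means the odds are about a third as large.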
Parameter estimates for therapeutic services (ordered by medical areas)
The present study is not particularly concerned with the exact value of coefficients per therapeutic service, but with the comparative ordering of the magnitude of these coefficients within each selected medical area. In the example above, the coefficients for the three services in the cardiology field are ordered according to the predicted classification of the amount of connectivity knowledge, as previously shown in Fig. 4. This means that the result is consistent with the classification that catheter, having the lowest level of connectivity knowledge, will have a lower odds ratio than cardiac surgery, which has the highest level of connectivity knowledge. Cardiac intensive care has an odds ratio with a magnitude in between these two services (although not exactly symmetric), which confirms its classification as having a medium level of connectivity knowledge. Empirical results for the other medical fields largely corroborate the classification specifying the amount of comparative connectivity knowledge per therapeutic service. Even in the case of surgical intensive services in which the odds ratio coefficient is not statistically significant, there is an indication that it has a medium level of connectivity knowledge compared to colonoscopy and robotic surgery. When the coefficient is not statistically significant, it indicates a lack of statistical association, meaning that the service is equally likely to be present in both types of hospitals.
The control variable for hospital size is not statistically significant, meaning that an increase in the number of beds offered by a hospital does not affect the degree of structural formalization.Footnote 7 The constant is also not statistically significant. The overall logistic model generates a pseudo R2 of 0.1926 and a Wald chi2 of 236.80 with Prob > chi2 of 0.0000. The log pseudo-likelihood is −578.93. Combined, these results show consistent and strong support for the theoretical model, although not in a deterministic way. More than 80% of the variance in hospitals' choice of degree of formalization is not explained by the model. This shows that other factors influence organizational structure in the healthcare sector. However, in a complex industry like this one, being able to account for approximately 20% of the behavior of a phenomenon can be considered quite relevant for theory development.
Post-estimation tests corroborate this conclusion by generating strong support for the model. Figure 7 presents the classification statistics and classification table showing that the overall rate of correct classification is estimated to be 74.29%, with 83.33% of the more formalized hospitals correctly classified (specificity) and 62.22% of the less formalized hospitals correctly classified (sensitivity), both groups having probabilities greater than the standard cutoff of 0.5. Figure 8 depicts the ROC (receiver operating characteristic) curve. This is a graph of sensitivity versus one minus specificity, and the area under the curve is calculated. A model with no predictive power would be a 45° line. The greater the predictive power, the more bowed the curve, and hence the area beneath the curve is often used as a measure of predictive power (STATA 2013, pp. 1119–1120). The area under the ROC curve generated by the specified model is 0.7744, which indicates high predictive power.
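The classification statistics are simple ratios over the confusion-table counts. As an illustrative sketch (the counts below are hypothetical, back-computed only so that the three rates match those reported; Figure 7's actual table is not reproduced here):

```python
# Hypothetical confusion counts at the 0.5 cutoff (illustrative only).
tp, fn = 56, 34    # less formalized hospitals: correctly / incorrectly classified
tn, fp = 100, 20   # more formalized hospitals: correctly / incorrectly classified

sensitivity = tp / (tp + fn)                 # "positives" correctly classified
specificity = tn / (tn + fp)                 # "negatives" correctly classified
accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall correct classification

print(f"sensitivity={sensitivity:.2%}, "
      f"specificity={specificity:.2%}, accuracy={accuracy:.2%}")
```

With these counts the script prints 62.22%, 83.33% and 74.29%, matching the reported rates; the ROC curve is then traced out by recomputing sensitivity and one minus specificity at every cutoff between 0 and 1.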
Statistic classification for the dependent variable
The ROC curve
Although competition plays a key role in conditioning the performance of organizations, strategic management is increasingly concerned with the impact of organizational capability on a firm's ability to deliver value to end consumers. Competing through capabilities means that organizations have to pay close attention to factors that significantly influence their long-term sustainability. Organizations that survive and grow in open market systems are usually those capable of applying relevant knowledge to the process of transforming inputs into outputs. Firms are special organizations precisely because they specialize in the process of conducting difficult and complex tasks through the application of knowledge. However, knowledge is not useful unless the organization develops the necessary infrastructure and complementary functions to apply it adequately. The application of knowledge as a productive resource is far from being a trivial procedure. What matters most in this regard is the firm's ability to align the knowledge required to conduct productive tasks with the capability of processing it. In other words, the organization depends on the fit between its knowledge function and its governing structure. A contingency perspective of the organizational capability theory of the firm promises to find the right match between them.
Not all types of knowledge affect organizational form equally. Only that portion of knowledge dedicated to regulating multi-task jobs ultimately affects how the firm is internally structured. This point is extremely important because previous contingency models treated knowledge as a homogeneous resource from a functional perspective. However, it is clear that knowledge can have different productive functions, including the ability to integrate different pieces of knowledge through organized cognitive work. This type of knowledge is used to temporally order content knowledge, which has relevant implications for the choice of governance structures, in particular those features of the structure concerned with the formalization of work processes. Increasing volumes of knowledge dedicated to regulating tasks (i.e., knowledge applied to meta-tasks) demand the adoption of coordination procedures expressed through the clear articulation of decision-making standards. The contingency perspective informs the firm on how best to design the features of the organizational architecture: less formalized structures should be adopted when the volume of connectivity knowledge is relatively low, while more formalized structures should be adopted when the volume of connectivity knowledge is relatively high.
The empirical evidence seems to support the theoretical relationship between connectivity knowledge and the degree of organizational formalization, although not necessarily in deterministic ways: the model was capable of explaining 19% of the variability in organizational forms adopted by hospitals located in a particular geographic region of the USA. This region is highly competitive and demonstrates how imperative it is for hospitals to make the right investments in the long-term scope of therapeutic services. The empirical findings generated by this study are not conclusive but support the emergence of a contingency theory of organizational capability. Future studies will need to fill the existing empirical gap based on improved classification of jobs according to their dimensions of tasks and meta-tasks. Instead of a comparison based on a simple ordinal classification, as done here, the next phase in this research program will require a much more precise description and measurement of each therapeutic procedure. For simplification reasons, the present study assumed that medical treatments are homogeneous across hospitals within the same geographic region, which might not accurately reflect the actual variability of practices. We have not yet achieved this stage of analytical precision due to limitations in the availability of appropriate databases. Much work still needs to be done in order to make a conclusive judgment of the validity of the suggested theory. The conceptual model, nevertheless, has passed a first and relevant attempt at falsification, which is important to motivate additional studies in this particular direction, not only in the healthcare industry but also in other industrial sectors highly dependent on the application of connectivity knowledge for the delivery of value to end consumers.
The data that support the findings of this study are available from the American Hospital Association (AHA), but restrictions apply to the availability of the database, which was used under license for the current study through the University of Connecticut's library archives, and so is not publicly available. Data are, however, available from the authors upon reasonable request and with permission of the AHA and UConn.
The direction of the arrows does not necessarily indicate causal relationships, but demands for alignment and fit (for an elaborated notion of fit, see Drazin and Van de Ven 1985; Van de Ven and Drazin 1985).
Both of these hospital arrangements are considered decentralized hierarchies because physicians continue to have full control of decision-making regarding the adoption of therapeutic practices, which is consistent with the American law that prohibits the "corporate practice of medicine" (i.e., it is always the physician who makes the ultimate decision, not the hospital) (Robison 1999).
Open networks do not represent informal structures per se, but simply less formal ones compared to those promoted by closed hierarchies. Even when contracts allow the autonomy of individual physicians, they are bound to a series of professional expectations that include the adoption of ethical standards as well as best practices promoted in medical schools and medical professional associations.
Given that all hospitals are necessarily decentralized in the USA as the result of the illegality of the corporate practice of medicine, it is possible to say that the degree of organizational centralization is also being controlled in this model through the selection of the sample. In this sense, professional organizations can be very different from traditional manufacturing firms.
This procedure is conducted in STATA 13 through the command vce(robust) (combined with frequency weights fw).
The conversion process followed the tradition of exponentiating the original coefficient. This procedure is conducted in STATA 13 through the command or.
Regional competitiveness was held fixed, generating results valid only for high levels of competitiveness among hospitals and medical insurers.
American Hospital Association (2016) Trendwatch Chartbook 2016, Trends Affecting Hospitals and Health Systems. American Hospital Association, Washington DC.
Argyres N, Silverman B (2004) R&D, organization structure, and the development of corporate technological knowledge. Strateg Manag J 25:929–958
Arrow K, Hahn F (1971) General competitive analysis. North Holland Publishing, Amsterdam/Oxford
Becerra-Fernandez I, Sabherwal R (2001) Organizational knowledge management: a contingency perspective. J Manag Inf Syst 18(1):23–55
Birkinshaw J, Nobel R, Ridderstrale J (2002) Knowledge as a contingency variable: do the characteristics of knowledge predict organization structure? Organ Sci 13(3):274–298
Burns L, Bazzoli G, Dynan L, Wholey D (2000) Impact of HMO market structure on physician-hospital strategic alliance. Health Serv Res 35(1):101–132
Burton R, Obel B (2004) Strategic organizational diagnosis and design: the dynamics of fit. Kluwer Academic Publishers, Dordrecht
Burton R, Obel B, Hakonsson DD (2015) Organizational design: a step-by-step approach. Cambridge University Press, Cambridge
Casalino L, Robinson J (2003) Alternative models of hospital-physician affiliation as the U.S. moves away from tight managed care. The Milbank Quarterly 81:331–351
Chandler A (1962) Strategy and structure: chapters in the history of the industrial Enterprise. MIT Press, Cambridge
Cockburn I, Henderson R, Stern S (2000) Untangling the origins of competitive advantage. Strateg Manag J 21:1123–1145
Conner K, Prahalad CK (1996) A resource-based theory of the firm: knowledge versus opportunism. Organ Sci 7(5):477–501
Cuellar A, Gertler P (2006) Strategic integration of hospitals and physicians. J Health Econ 25(1):1–28
Daft R, Lengel R (1986) Organizational information requirements, media richness and structural design. Management Science, May, pp 554–571
Donaldson L (2001) The contingency theory of organizations. Sage Publications, Thousand Oaks
Dosi G, Nelson R, Winter S (2000) Introduction in the nature and dynamics of organizational capabilities. Oxford University Press, New York
Drazin R, Van de Ven A (1985) Alternative forms of fit in contingency theory. Adm Sci 30:514–539
Eddy E, Tannenbaun S, Mathieu J (2013) Helping teams to help themselves: comparing two team-led debriefing methods. Pers Psychol 66:975–1008
Forrester R (2000) Capturing learning and applying knowledge: an investigation of the use of innovation teams in Japanese and American automotive firms. J Bus Res 47(1):35–45
Freedman D (2009) Statistical models: theory and practice. Cambridge University Press, Cambridge
Galbraith J (1973) Designing complex organizations. Addison-Wesley, Reading
Ghemawat P (1991) Commitment: the dynamic of theory of strategy. Harvard University Press, Cambridge
Gold A, Malhotra A, Segars A (2001) Knowledge management: an organizational capabilities perspective. J Manag Inf Syst 18(1):185–214
Grant R (1996) Toward a knowledge-based theory of the firm. Strateg Manag J Spec Issue 17:109–122
Habid M, Victor B (1991) Strategy, structure, and performance of U.S. manufacturing and services MNCs: a comparative analysis. Strateg Manag J 12:589–606
Hosmer D, Lemeshow S (2000) Interpretation of the fitted logistic regression model, in. Applied logistic regression (2nd Ed). Wiley, Hoboken
Hu F, Goldberg J, Hedeker B, Pentz MA (1998) Comparison of population-averaged and subject-specific approaches for analyzing repeated binary outcomes. Am J Epidemiol 47(7):694–703
Huber P (1967) The behavior of maximum likelihood estimates under nonstandard conditions. In: Vol. 1 of proceedings of the fifth Berkeley symposium on mathematical statistics and probability. University of California Press, Berkeley, pp 221–233
Hutchins E (1996) Cognition in the wild. The MIT Press, Cambridge
James C (2003) Designing learning organizations. Organ Dyn 32(1):46–61
Keller R (1994) Technology-information processing fit and the performance of R&D project groups: a test of contingency theory. Acad Manag 37:167–179
Kim J, Park J, Prescott J (2003) The global integration of business functions: a study of multinational businesses in integrated global industries. J Int Bus Stud 34(4):327–344
Kogut B, Zander U (1992) Knowledge of the firm, combinative capabilities, and the replication of technology. Organizational Sci 3:383–397
Lam A (2000) Tacit knowledge, organizational learning, and societal institutions: an integrated framework. Organizational Stud 21(3):487–513
Larkey P, Sproull L (1984) Advances in information processing in organizations. JAI Press, Greenwich
Lawrence P, Lorsch J (1967) Organization and environment: managing differentiation and integration. Harvard University, Boston
Liang K, Zeger S (1986) Longitudinal data analysis using generalized linear models. Biometrika 73(1):13–22
Mahoney J, McGahan A (2007) The field of strategic management within the evolving science of organization. Strateg Organ 5(1):535–550
Marks M, Mathieu J, Zaccaro S (2010) A temporally based framework and taxonomy of team processes. Acad Manag 3:356–376
Mathieu J, Gilson L, Ruddy T (2006) Empowerment and team effectiveness: an empirical test of an integrated model. J Appl Psychol 9:97–108
Mintzberg H (1979) The structuring of organizations: a synthesis of the research. Prentice-Hall, New Jersey
Monteverde K (1995) Technical dialog as an incentive for vertical integration in the semiconductor industry. Manag Sci 1624–1638
Montibeller G, Shaw D, Westcombe M (2006) Using decision support systems to facilitate the social process of knowledge management. Knowl Manag Res Pract 4:125–137
Moran P, Ghoshal S (1996) Value creation by firms. Academy of Management Proceedings, Meeting Abstract Supplement, pp 41–45
Morrisey M, Wedig G, Hassan M (1996) Do nonprofit hospitals pay their way? Health Affairs 4:132–144
Nelson R, Winter S (1982) An evolutionary theory of economic change. Harvard University Press, Cambridge
Nerur S, Rasheed A, Nataraja V (2008) The intellectual structure of the strategic management field: an author co-citation analysis. Strateg Manag J 29:319–336
Nickerson J, Zenger T (2004) A knowledge-based theory of the firm: the problem-solving perspective. Organ Sci 15:617–632
Nonaka I, Takeuchi H (1995) The knowledge-creating company. Oxford University Press, New York
Perrow C (1967) A framework for the comparative analysis of organizations. Am Soc Rev 34:194–208
Pertusa-Ortega E, Zaragoza-Saez P, Claver-Cortes E (2010) Can formalization, complexity, and centralization influence knowledge performance? J Bus Res 63:310–320
Postrel S (2002) Island of shared knowledge: specialization and mutual understanding in problem-solving teams. Organ Sci 13(3):303–320
Power J, Waddell D (2004) The link between self-managed work teams and learning organizations using performance indicators. Learn Organ 11(2/3):244–259
Ramos-Rodrigues A, Ruiz-Navarro J (2004) Changes in the intellectual structure of strategic management research: a Bibliometric study of the strategic management journal, 1980-2000. Strateg Manag J 25:981–1004
Robison J (1999) The corporate practice of medicine: competition and innovation in the health care. University of California Press, Berkeley
Rogers P, Bamford C (2002) Information planning process and strategic orientation: the importance of fit in high-performing organizations. J Bus Res 55:205–215
Ryan R, Deci E (2000) Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol 55:68–78
Scott W (1982) Managing professional work: three models of control for health organizations. Health Serv Res 17:213–240
Scott W, Ruef M, Mendel P, Caronna C (2000) Institutional change and healthcare organizations: from professional dominance to managed care. The University of Chicago Press, Chicago and London
STATA (2013) Stata Base Reference Manual, Stata 13: Stata Press. College Station, Texas
Sweller J (1988) Cognitive load during problem solving: effects on learning. Cogn Sci 12:257–285
Teece D (1996) Firm organization, industrial structure, and technology innovation. J Econ Behav Organ 31:193–224
Thompson J (1967) Organizations in action. McGraw Hill, New York
Truett J, Cornfield J, Kannel W (1967) A multivariate analysis of the risk of coronary heart disease in Framingham. J Chronic Dis 20:511–524
Tushman M, Nadler D (1978) Information processing as an integrating concept in organizational design. Acad Manag 3:613–624
Van de Ven A, Drazin R (1985) The concept of fit in contingency theory. Res Organ Behav 7:333–365
White HL Jr (1980) A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817–838
Williamson O (1985) Economic institutions of capitalism. The Free Press, New York
Winter S (1998) Knowledge and competence as strategic assets. In: Teece D (ed) The competitive challenge: strategies for industrial innovation and renewal. Ballinger, Cambridge, pp 159–184
Winter S (2003) Understanding dynamic capabilities. Strateg Manag J Special Issue 24(10):991–995
Wolf J, Egelhoff W (2002) A reexamination and extension of international strategy-structure theory. Strateg Manag J 23:181–189
Woodward J (1980) Industrial organization: theory and practice. Oxford Press, Oxford
Zander U, Kogut B (1995) Knowledge and the speed of the transfer and imitation of organizational capabilities: an empirical test. Organ Sci 6:76–92
Zander U, Kogut B (1996) What firms do? Coordination, identity, and learning. Organ Sci 7:502–518
This paper has benefited from many different people who directly and indirectly contributed to this research project. In an initial phase of development, I would like to thank the Fairleigh Dickinson University Provost Seed Grant for its support, Dr. Joel Harmon, and Dr. Dennis Scotti as the co-author of the manuscript presented at the 2014 Eastern Academy of Management Meeting under the title "Effective Alignment Between Hospital and Physicians." In a second phase of development, I would like to thank Dr. Lucy Gilson, Dr. John Mathieu, and the financial support provided by the University of Connecticut. I benefited from the feedback provided by one anonymous referee at the 2018 Academy of Management, Chicago, where the paper was presented under the title "Knowledge Technology and Organizational Structure." I am also grateful for the guidance of two anonymous referees at the Journal of Organizational Design. Special thanks go to Joshua Coron for research assistance and to Sarah Earle for editing advice. All errors are mine.
This research project received funding from the University of Connecticut's Business School and Fairleigh Dickinson University's Provost Seed Grant, which contributed to the purchase of surveys and covered various other research expenses.
Business School, Management Department, University of Connecticut at Stamford, 1 University Place, Office #382, Stamford, CT, 06901, USA
Rogerio S. Victer
The present manuscript has just one author. The author read and approved the final manuscript.
Correspondence to Rogerio S. Victer.
The author declares that there are no competing interests for this manuscript.
Victer, R.S. Connectivity knowledge and the degree of structural formalization: a contribution to a contingency theory of organizational capability. J Org Design 9, 7 (2020). https://doi.org/10.1186/s41469-020-0068-3
Capability theory of the firm
Contingency theory
Sequences and Series
Introduction to Sequences
Introduction to Arithmetic Progressions
Recurrence relationships for AP's
Terms in Arithmetic Progressions
Graphs and Tables - AP's
Notation for a Series
Arithmetic Series (defined limits)
Arithmetic Series (using graphics calculators)
Applications of Arithmetic Progressions
Introduction to Geometric Progressions
Recurrence relationships for GP's
Finding the Common Ratio
Terms in Geometric Progressions
Graphs and Tables - GP's
Geometric Series
Geometric Series (using graphics calculators)
Infinite sum for GP's
Applications of Geometric Progressions
Applications of Geometric Series
Sequences and Saving Money (Investigation)
First Order Linear Recurrences Introduction
Graphs and Tables - Recurrence Relations
Solutions to Recurrence Relations
Steady state solutions to recurrence relations
Applications of Recurrence Relations
Level 7 - NCEA Level 2
Recall that every geometric sequence begins as $a, ar, ar^2, \dots$ and that the $n$th term is given by:
$t_n=ar^{n-1}$
Having a formula for the $n$th term allows us to quickly generate a table of values for the sequence. For example, in the sequence $12, 18, 27, \dots$ the first term is $12$ and the common ratio is $1.5$, and so the general term is given by the formula $t_n=12\times\left(1.5\right)^{n-1}$. By substituting for $n$ appropriately and using a scientific calculator, we can quickly generate the following table of the first $7$ terms of the sequence:
$n$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$
$t_n$ | $12$ | $18$ | $27$ | $40.5$ | $60.75$ | $91.125$ | $136.6875$
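A table like this can be generated directly from the general term; a minimal sketch in Python (evaluating $t_n = 12 \times (1.5)^{n-1}$ for $n = 1$ to $7$, along with the negative-ratio variant discussed below):

```python
# First seven terms of t_n = 12 * 1.5**(n - 1) (positive common ratio).
a, r = 12, 1.5
terms = [a * r ** (n - 1) for n in range(1, 8)]
print(terms)        # [12.0, 18.0, 27.0, 40.5, 60.75, 91.125, 136.6875]

# With the negative ratio r = -1.5 the signs of successive terms alternate.
terms_neg = [a * (-r) ** (n - 1) for n in range(1, 8)]
print(terms_neg)    # [12.0, -18.0, 27.0, -40.5, 60.75, -91.125, 136.6875]
```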
Perhaps more interesting, though, are the different types of graphs that geometric sequences correspond to. Usually the graphs are not linear like those of arithmetic progressions. Graphs of geometric sequences are best known as rising or reducing graphs where the rate of rising continually changes, resulting in a curved growth or decay path. This happens whenever the common ratio is positive, like the geometric progression depicted in the above table. However, when the common ratio is negative, the values of successive terms flip their sign so that the graph is depicted as either a growing or diminishing zig-zag path. Think, for example, about the geometric progression that is identical to the one in the table, but has a negative ratio $r=-1.5$ so that its $n$th term is given by $t_n=12\times\left(-1.5\right)^{n-1}$. The new table becomes:
$n$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$
$t_n$ | $12$ | $-18$ | $27$ | $-40.5$ | $60.75$ | $-91.125$ | $136.6875$
Checking, for $n=1$ we have $t_1=12\times\left(-1.5\right)^{1-1}=12$, and for $n=2$ we have $t_2=12\times\left(-1.5\right)^{2-1}=-18$, so even-numbered terms become negative and odd-numbered terms become positive.
Here is a graph of the two geometric sequences depicted in both tables. Note that the odd terms of the zig-zag graph coincide with the terms of the first geometric progression.
Note also that had the absolute value of the ratios of both geometric progressions been less than $1$, then the absolute value of the terms in both sequences would be reducing in size.
The $n$th term of a geometric progression is given by the equation $T_n=2\times3^{n-1}$.
Complete the table of values:
$n$ | $1$ | $2$ | $3$ | $4$ | $10$
$T_n$ | | | | |
What is the common ratio between consecutive terms?
Plot the points in the table that correspond to $n=1$, $n=2$, $n=3$ and $n=4$.
If the plots on the graph were joined they would form:
a straight line
a curved line
On Mercury the equation $d=1.5t^2$ can be used to approximate the distance in metres, $d$, that an object falls in $t$ seconds, if air resistance is ignored.
Complete the table. Do not round any values.
$t$ | $0$ | $2$ | $4$ | $6$
$d$ | | | |
Graph the function $d=1.5t^2$.
Use the equation or otherwise to determine the number of seconds, $t$, that it would take an object to fall $5.6$ m. Give the value of $t$ to the nearest second.
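A sketch of one way to answer the last part: rearranging $d=1.5t^2$ gives $t=\sqrt{d/1.5}$ (only the positive root is physically meaningful):

```python
import math

# Rearranging d = 1.5 * t**2 gives t = sqrt(d / 1.5).
d = 5.6
t = math.sqrt(d / 1.5)
print(f"t = {t:.3f} s, which is {round(t)} s to the nearest second")
```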
A new car purchased for $\$38200$ depreciates at a rate $r$ each year.
Use the table of values to determine the value of $r$.
years passed ($n$) | $0$ | $1$ | $2$
value of car ($A$) | $38200$ | $37818$ | $37439.82$
Determine the rule for $A$, the value of the car, $n$ years after it is purchased.
Assuming the rate of depreciation remains constant, how much can the car be sold for after $6$ years? Give your answer to the nearest cent.
A new motorbike purchased for the same amount depreciates according to the model $V=38200\left(0.97^n\right)$. Which vehicle depreciates more rapidly?
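A minimal sketch of the calculations behind these questions, assuming the yearly ratio stays constant (so $A = 38200 \times (1-r)^n$):

```python
# Car: the ratio between successive yearly values gives the retention
# factor 1 - r, from which the depreciation rate r follows.
values = [38200, 37818, 37439.82]
ratio = values[1] / values[0]          # 0.99, i.e. r = 1% per year
car_after_6 = 38200 * ratio ** 6       # value after 6 years

# Motorbike: V = 38200 * 0.97**n loses 3% per year, so it depreciates faster.
bike_after_6 = 38200 * 0.97 ** 6

print(round(ratio, 2), round(car_after_6, 2), round(bike_after_6, 2))
```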
M7-3
Use arithmetic and geometric sequences and series
Apply sequences and series in solving problems | CommonCrawl |
105 articles found
Optical Detection of Green Emission for Non-Uniformity Film in Flat Panel Display
Fuming Tzu, Jung-Hua Chou
Subject: Engineering, Automotive Engineering Keywords: optical, green, colour difference, chromaticity, just noticeable difference
Among colours, green is the one to which human vision is most sensitive, so green defects on displays are effortlessly perceived by the photopic eye, whose peak sensitivity lies at the 555 nm wavelength of the spectrum. With the market moving toward high resolution, displays can have resolutions of 10 million pixels. This work therefore inspects the appearance of ultra-high-resolution TFT-LCD panels. Machine vision combined with a reflective chromaticity spectrometer is used to quantify defects such as blackening and whitening. The results reveal significant phenomena for recognizing the chromatic tendency associated with non-uniform film. In addition, the quantitative assessment shows that a chromaticity difference of 0.001 in CIE xyY is a just noticeable difference (JND), and the method detects even finer deviations. Moreover, an optical device with a 198 Hg discharge lamp calibrates the spectrometer's accuracy.
Choosing the Target Difference ("effect size") for a Randomised Controlled Trial - DELTA2 Guidance
Jonathan A. Cook, Steven A. Julious, William Sones, Lisa V. Hampson, Catherine Hewitt, Jesse A. Berlin, Deborah Ashby, Richard Emsley, Dean A. Fergusson, Stephen J. Walters, Edward C.F. Wilson, Graeme Maclennan, Nigel Stallard, Joanne C. Rothwell, Martin Bland, Louise Brown, Craig R. Ramsay, Andrew Cook, David Armstrong, Doug Altman, Luke David Vale
Subject: Medicine & Pharmacology, General Medical Research Keywords: Target difference, clinically important difference, sample size, guidance, randomised trial, effect size, realistic difference
The aim of this document is to provide practical guidance on the choice of target difference used in the sample size calculation of a randomised controlled trial (RCT). Guidance is provided with a definitive trial, one that seeks to provide a useful answer, in mind, and not trials of a more exploratory nature. The term "target difference" is taken throughout to refer to the difference that is used in the sample size calculation (the one that the study formally "targets"). Please see the glossary for definitions and clarification with regard to other relevant concepts. In order to address the specification of the target difference, it is appropriate, and to some degree necessary, to touch on related statistical aspects of conducting a sample size calculation. Generally, the discussion of other aspects and more technical details is kept to a minimum, with the more technical aspects covered in the appendices and references to relevant sources provided for further reading. The main body of this guidance assumes a standard RCT design is used; formally, this can be described as a two-arm parallel-group superiority trial. Most RCTs test for superiority of the interventions, that is, whether or not one of the interventions is superior to the other (see Box 1 for a formal definition of superiority and of the two most common alternative approaches). Some common alternative trial designs are considered in Appendix 3. Additionally, it is assumed in the main body of the text that the conventional (Neyman-Pearson) approach to the sample size calculation of an RCT is being used. Other approaches (Bayesian, precision, and value of information) are briefly considered in Appendix 2 with reference to the specification of the target difference.
Evaluating State-Level Prescription Drug Monitoring Program (PDMP) and Pill Mill Effects on Opioid Consumption in Pharmaceutical Supply Chain
Amirreza Sahebi Fakhrabad, Amir Hossein Sadeghi, Robert Handfield
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Opioid crisis; PDMP; Pill Mill; Difference-in-Difference; Policy Analysis; Pharmaceutical Supply Chain
The opioid crisis in the United States has had devastating effects on communities across the country, leading many states to pass legislation that limits the prescription of opioid medications in an effort to reduce the number of overdose deaths. This study evaluates the impact of two categories of regulations, PDMP and Pill Mill laws, on the supply of opioid prescriptions at the level of dispensers and distributors (excluding manufacturers) using ARCOS data. The study uses a difference-in-difference method with a two-way fixed-effects design to analyze the data. We find that both regulations are associated with reductions in the volume of opioid distribution. However, the study reveals that these regulations may have unintended consequences, such as shifting the distribution of controlled substances to neighboring states. For example, in Tennessee, the implementation of Operational PDMP regulations reduced in-state distribution of opioid drugs by 3.36% (95% CI, 2.37 to 4.3), while out-of-state distribution to Georgia, which did not have effective PDMP regulations in place, increased by 16.93% (95% CI, 16.42 to 17.44). Our study emphasizes that policymakers should consider the potential for unintended distribution shifts of opioid drugs to neighboring states with laxer regulations, as well as varying impacts on different dispenser types.
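The difference-in-difference logic the study relies on can be sketched with hypothetical numbers (the figures below are illustrative only, not drawn from the ARCOS data):

```python
# Minimal difference-in-difference sketch with hypothetical state-level
# opioid shipment volumes (illustrative, not the study's ARCOS data).
pre_treated, post_treated = 100.0, 92.0    # treated state, before/after PDMP
pre_control, post_control = 100.0, 98.0    # control state, same periods

# DiD: the change in the treated state minus the change in the control
# state nets out common time trends, isolating the policy effect.
did = (post_treated - pre_treated) - (post_control - pre_control)
print(did)  # -6.0 -> the policy is associated with a 6-unit reduction
```

The two-way fixed-effects design used in the paper generalizes this to many states and periods, absorbing state and time effects before comparing treated and untreated units.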
Seeking Gender Difference in Code-Switching by Investigating Mandarin-English Child Bilinguals in Singapore
Weihang Huang, Danqian Lyu, Jingping Lin
Subject: Arts & Humanities, Linguistics Keywords: Code-switching; Gender difference; Bilingualism
As a behavior of bilingual individuals and an indispensable part of bilingual speech, code-switching has been investigated by many researchers. However, many variables influence code-switching, and each has the potential to be a confounding variable. Gender is one such variable; yet whether there are significant gender differences in code-switching, and what those differences are, remains unknown for Mandarin-English child bilinguals, as the previous literature diverges on the existence of gender differences. Therefore, this paper seeks potential gender differences in the amount and distribution of code-switching through a quantitative analysis of speech data in the Singapore Bilingual Corpus. The results indicate that gender differences are significant in the amount of intra code-switching. However, no considerable gender difference is observed in the amount of inter code-switching or in the code-switching-related environment.
Oscillation of a Class of Third Order Generalized Functional Difference Equation
P. Venkata Mohan Reddy, Adem Kilicman, Maria Susai Manuel
Subject: Mathematics & Computer Science, Analysis Keywords: Generalized difference operator; Oscillation; Convergence.
The authors intend to establish new oscillation criteria for a class of generalized third-order functional difference equations of the form \begin{equation}{\label{eq01}} \Delta_{\ell}\left(a_2(n)\left[\Delta_{\ell}\left(a_1(n)\left[\Delta_{\ell}z(n)\right]^{\beta_1}\right)\right]^{\beta_2}\right)+q(n)f(x(g(n)))=0, ~~n\geq n_0, \end{equation} where $z(n)=x(n)+p(n)x(\tau(n))$. We also present sufficient conditions for the solutions to converge to zero. Suitable examples are presented to validate our main results.
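For readers unfamiliar with the notation, the generalized difference operator is conventionally defined by Δ_ℓ x(n) = x(n + ℓ) − x(n); a minimal sketch, assuming this standard convention (the abstract does not define the operator explicitly):

```python
# Generalized difference operator Delta_ell x(n) = x(n + ell) - x(n),
# assuming the convention standard in this literature (the abstract
# does not define it explicitly).
def delta(seq, ell=1):
    """Apply the step-ell forward difference to a finite sequence."""
    return [seq[n + ell] - seq[n] for n in range(len(seq) - ell)]

# For x(n) = n^2: Delta_1 x(n) = 2n + 1 and Delta_2 x(n) = 4n + 4.
squares = [n * n for n in range(6)]   # [0, 1, 4, 9, 16, 25]
d1 = delta(squares, 1)                # [1, 3, 5, 7, 9]
d2 = delta(squares, 2)                # [4, 8, 12, 16]
```

Nesting three such operators, with the weights a_1, a_2 and exponents β_1, β_2, reproduces the left-hand side of the equation studied in the paper.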
Monte Carlo Comparison for Nonparametric Threshold Estimators
Chaoyi Chen, Yiguo Sun
Subject: Social Sciences, Econometrics & Statistics Keywords: difference kernel estimator; integrated difference kernel estimator; M-estimation; Monte Carlo; nonparametric threshold regression
This paper compares the finite-sample performance of three nonparametric threshold estimators via a Monte Carlo method. Our results show that the finite-sample performance of the three estimators is not robust to the relative position of the threshold level along the distribution of the threshold variable, especially when the structural change occurs in the tail of the distribution.
Quantum Electromagnetic Finite-Difference Time-Domain Solver
Dong-Yeop Na, Weng Cho Chew
Subject: Physical Sciences, Optics Keywords: Quantum Maxwell's equations; finite-difference time-domain
We employ an alternative approach that quantizes electromagnetic fields in coordinate space, instead of mode (or Fourier) space, so that local features of photons can be described efficiently, physically, and more intuitively. To do this, coordinate-ladder operators are defined from mode-ladder operators via a unitary transformation of the systems involved, for arbitrary inhomogeneous dielectric media. One can then expand electromagnetic field operators through the coordinate-ladder operators weighted by non-orthogonal and spatially localized bases, which are propagators of the initial quantum electromagnetic (complex-valued) field operators; we call them QEM-CV-propagators. However, no general closed-form solutions are available for them. This inspires us to develop a quantum finite-difference time-domain (Q-FDTD) scheme to numerically evolve QEM-CV-propagators in time. To check the validity of the proposed Q-FDTD scheme, we perform computer simulations to observe the Hong-Ou-Mandel effect resulting from the destructive interference of two photons in a 50/50 quantum beam splitter.
On the Dynamics of a System of Difference Equations x_{n+1} = x_{n−1}y_n − 1, y_{n+1} = y_{n−1}z_n − 1, z_{n+1} = z_{n−1}x_n − 1
Erkan Taşdemir, Yüksel Soykan
Subject: Mathematics & Computer Science, General Mathematics Keywords: difference equations; dynamical systems; periodicity; stability; boundedness
In this paper, we study the dynamics of the following system of nonlinear difference equations: x_{n+1} = x_{n−1}y_n − 1, y_{n+1} = y_{n−1}z_n − 1, z_{n+1} = z_{n−1}x_n − 1. In particular, we investigate the periodicity, boundedness, and stability of the related system of difference equations.
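The system is easy to explore numerically. The sketch below (illustrative only, not the paper's analysis) iterates it and checks the symmetric equilibrium x = y = z = (1 + √5)/2, which solves the fixed-point equation t = t² − 1:

```python
# Iterate x_{n+1} = x_{n-1} y_n - 1, y_{n+1} = y_{n-1} z_n - 1,
# z_{n+1} = z_{n-1} x_n - 1 (illustrative sketch, not from the paper).
def iterate(x0, x1, y0, y1, z0, z1, steps):
    xs, ys, zs = [x0, x1], [y0, y1], [z0, z1]
    for n in range(1, steps):
        xs.append(xs[n - 1] * ys[n] - 1)   # x_{n+1} = x_{n-1} y_n - 1
        ys.append(ys[n - 1] * zs[n] - 1)   # y_{n+1} = y_{n-1} z_n - 1
        zs.append(zs[n - 1] * xs[n] - 1)   # z_{n+1} = z_{n-1} x_n - 1
    return xs, ys, zs

# A symmetric equilibrium x = y = z = t must satisfy t = t^2 - 1,
# i.e. t = (1 + sqrt(5)) / 2; starting there, the orbit stays put.
phi = (1 + 5 ** 0.5) / 2
xs, ys, zs = iterate(phi, phi, phi, phi, phi, phi, 5)
```

Starting from other initial values lets one probe the boundedness and periodicity questions the paper studies analytically.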
Stability and Periodic Nature of a System of Difference Equations
Erkan Taşdemir
Subject: Physical Sciences, Mathematical Physics Keywords: difference equations; equilibrium points; stability; periodicity; invariant
In this paper, we investigate the equilibrium points of the following system of difference equations: x_{n+1} = x_n²y_{n−1}, y_{n+1} = y_n²x_{n−1}. We also study the asymptotic stability of the related system of difference equations. Further, we examine the periodic solutions of the related system with period two. Additionally, we find the invariant interval and periodic cycles of the related system of difference equations.
Finite Difference Algorithm on Non-uniform Meshes for Modeling 2D Magnetotelluric Responses
Xiaozhong Tong, Yujun Guo, Wei Xie
Subject: Earth Sciences, Geophysics Keywords: finite-difference algorithm; magnetotelluric; 2D structures; modeling
A finite-difference approach with non-uniform meshes is presented for simulating magnetotelluric responses in 2D structures. We present the formulation of this scheme, give some insights into its successful implementation, and compare the finite-difference solution with known numerical results and simple analytical solutions. First, a homogeneous half-space model was tested, and the finite-difference approach provided very good accuracy for 2D magnetotelluric modeling. Then, compared with the analytical solutions for a two-layered model, the relative errors of the apparent resistivity and the impedance phase both increased as the frequency increased. Finally, we compare our finite-difference simulation results for the COMMEMI 2D-0 model with finite-element solutions; both results are in close agreement. These comparisons confirm the validity and reliability of our finite-difference algorithm.
PAMPAS: A PsychoAcoustical Method for the Perceptual Analysis of Multidimensional Sonification
Tim Ziemer, Holger Schultheis
Subject: Physical Sciences, Acoustics Keywords: sonification evaluation; psychoacoustics; just noticeable difference; difference limen; discrimination threshold; comparison of sonification designs; maximum likelihood procedure; auditory display
The sonification of data to communicate information to a user is a relatively new approach that established itself around the 1990s. To date, many researchers design their individual sonification from scratch; there are no standards in sonification design and evaluation. But researchers and practitioners have formulated several requirements and established several methods. There is wide consensus that psychoacoustics could play an important role in the sonification design and evaluation phases. But this requires an adaptation of psychoacoustic methods to the signal types and requirements of sonification. In this method paper we present PAMPAS, a PsychoAcoustical Method for the Perceptual Analysis of multidimensional Sonification. A well-defined, well-established, efficient, reliable, and replicable Just Noticeable Difference experiment using the Maximum Likelihood Procedure serves as the basis for achieving linearity of parameter mapping during the sonification design stage and for identifying and quantifying perceptual effects during the sonification evaluation stage, namely the perceptual resolution, hysteresis effects, and perceptual interferences. The experiment results are universal scores from a standardized data space and a standardized procedure. These scores can serve to compare multiple sonification designs of a single researcher, or even designs from different research groups. The method can supplement other sonification design and evaluation methods from a perceptual viewpoint.
Approximate Solution of the Elliptic Poisson Equation with IBVP Using a Finite Difference Scheme
Vladimir Jaćimović
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Poisson's equation; IBVP; Numerical Solution; Finite difference method
In this study, we consider the numerical solution of a Poisson equation in a domain. We present first- and second-order finite difference methods for solving the initial boundary value problem, and we present the numerical solution of the Poisson equation in a two-dimensional finite region. The numerical and exact solutions for the presented method are compared and the figures presented. The error analysis tables demonstrate the efficiency of the method.
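As an illustration of this type of scheme (a generic second-order 5-point stencil with a simple Jacobi iteration, not the authors' exact implementation), a finite difference Poisson solver can be verified against a manufactured solution:

```python
import numpy as np

# Second-order 5-point finite-difference Jacobi sketch for the Poisson
# problem u_xx + u_yy = f on the unit square with zero boundary values.
# Manufactured solution u = sin(pi x) sin(pi y), so f = -2 pi^2 u.
# (Illustrative sketch; grid size and sweep count are arbitrary.)
n = 33                                   # grid points per side
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * exact

u = np.zeros((n, n))                     # zero Dirichlet boundary built in
for _ in range(2000):                    # fixed number of Jacobi sweeps
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]
                            - h**2 * f[1:-1, 1:-1])

err = np.max(np.abs(u - exact))          # discretization + iteration error
```

On this grid the maximum error is on the order of 10⁻³, consistent with the O(h²) accuracy of the 5-point stencil.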
Preprint CONCEPT PAPER | doi:10.20944/preprints202012.0200.v1
Innate Perception of Risk: Probability Ratio or Difference?
Milind Watve, Harshada Vidwans Dubey, Rohini Kharate
Subject: Behavioral Sciences, Applied Psychology Keywords: risk assessment; odds ratio; hazard ratio; probability difference
In the public health literature, the risk of death or disease associated with a dietary, environmental, or behavioral factor is most commonly denoted by the odds ratio (OR), hazard ratio (HR), or risk ratio (RR). The ratio indices have several desirable statistical properties. However, the most important question is whether there are evolved innate norms of risk perception that people use, and what they are. We conducted a simple one-question survey of 98 individuals with different ages, sexes, and educational and professional backgrounds. The respondents were asked to judge the relative perceived risk of four different hypothetical habits, given data on the percentage of people affected by the disease with and without each habit, and to rank the risks of the four habits. Results showed that the habits with the highest difference between the probabilities of acquiring the disease were ranked high on risk perception. The probability ratios did not affect risk perception significantly. Further, age, sex, profession, and formal training in statistics did not affect the response significantly. Even individuals who were formally trained to use OR and HR as risk indicators preferred probability differences over ratios for judging their own risk in the perceived context. This preliminary inquiry into intuitive statistical perception suggests that designing statistical indices based on people's innate perception may be a better strategy than trying to train people to understand the indices designed by expert statisticians.
Dynamics of a Second-Order System of Nonlinear Difference Equations
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equation; stability; global stability; periodicity; eventually periodicity
In this paper, we investigate the equilibrium points, the stability of the two equilibrium points, convergence to the negative equilibrium point, periodic solutions, and the existence of bounded or unbounded solutions of the system of nonlinear difference equations x_{n+1} = x_{n−1}y_n − 1, y_{n+1} = y_{n−1}x_n − 1, n = 0, 1, ..., where the initial values are real numbers. Additionally, we present some numerical examples to verify our theoretical results.
Association between Problematic Internet Use and Sleep Disturbance Among Adolescents: the Role of the Child's Sex
Jiewen Yang, Yangfeng Guo, Xueying Du, Yi Jiang, Wanxin Wang, Di Xiao, Tian Wang, Ciyong Lu, Lan Guo
Subject: Behavioral Sciences, Social Psychology Keywords: Problematic Internet use, sleep disturbance, sex difference, adolescents
Internet use has become an integral part of daily life, and adolescents are at especially high risk of developing problematic Internet use (PIU). Although one of the most well-known comorbid conditions of PIU is sleep disturbance, little is known about the sex disparity in this association. This school-based survey of students in grades 7-9 was conducted to estimate the prevalence of PIU and sleep disturbance among Chinese adolescents, to test the association between PIU and sleep disturbance, and to investigate the role of the child's sex in this association. A two-stage stratified cluster sampling method was used to recruit participants, and two-level logistic regression models were fitted. The mean Internet Addiction Test score was 37.2 (SD: 13.2), and 15.5% (736) met the criteria for PIU. After adjusting for control variables, problematic Internet users were at a higher risk of sleep disturbance (adjusted odds ratio = 2.41, 95% CI = 2.07-3.19). Sex-stratified analyses also demonstrated that the association was stronger in girls than in boys. In this respect, paying more attention to the sleep patterns of adolescents who report excessive Internet use is recommended, and this early identification may be of practical importance for schools, parents, and adolescents themselves.
Klotho Regulated by Estrogen Plays a Key Role in Sex Differences in Stress Resilience
Zhinei Tan, Yongxia Li, Yinzheng Guan, Javed Iqbal, Chenyue Wang, Xinming Ma
Subject: Life Sciences, Molecular Biology Keywords: klotho; estrogen; hippocampus; chronic stress; sex difference; stress resilience
Klotho (KL) is a glycosyl hydrolase and aging-suppressor gene. Stress is a risk factor for depression and anxiety, which are highly comorbid with each other. The aim of this study was to determine whether KL is regulated by estrogen and plays an important role in sex differences in stress resilience. Our results showed that KL is regulated by estrogen in rat hippocampal neurons in vivo and in vitro and is essential for the estrogen-mediated increase in the number of presynaptic vesicular glutamate transporter 1 (Vglut1)-positive clusters on the dendrites of hippocampal neurons. The role of KL in sex differences in stress responses was examined in rats using three weeks of chronic unpredictable mild stress (CUMS). CUMS produced a deficit in spatial learning and memory and anhedonic-like and anxiety-like behaviors in male but not female rats, accompanied by a reduction in KL protein levels in the hippocampus of male, but not female, rats. This demonstrated the resilience of female rats to CUMS. Interestingly, knockdown of KL protein levels in the rat hippocampus caused a decrease in stress resilience in both sexes, especially in female rats. These results suggest that the regulation of KL by estrogen plays an important role in estrogen-mediated synapse formation, and that KL plays a critical role in the sex differences in the cognitive deficit and anhedonic-like and anxiety-like behaviors induced by chronic stress in rats, highlighting an important role of KL in sex differences in stress resilience.
UHPLC-Orbitrap-HRMS Identification of 51 Oleraceins (Cyclo-Dopa Amides) in Portulaca oleracea L. Cluster Analysis and MS2 Filtering by Mass Difference
Yulian Voynikov, Paraskev Nedialkov, Reneta Gevrenova, Dimitrina Zheleva-Dimitrova, Vessela Balabanova, Ivan Dimitrov
Subject: Chemistry, Analytical Chemistry Keywords: orbitrap; purslane; oleracein; diagnostic ion; diagnostic difference; clustering methods
Oleraceins are a class of indoline amide glycosides found in Portulaca oleracea L. (Portulacaceae), or purslane. These compounds are characterized by a 5,6-dihydroxyindoline-2-carboxylic acid N-acylated with cinnamic acid derivatives, and many are glucosylated. Herein, hydromethanolic extracts of the aerial parts of purslane were subjected to UHPLC-Orbitrap-HRMS analysis, conducted in negative ionization mode. Diagnostic ion filtering (DIF), followed by diagnostic difference filtering (DDF), was used to automatically filter the MS data and select plausible oleracein structures. After an in-depth MS2 analysis, a total of 51 oleracein compounds were tentatively identified. Of these, 26 matched the structures of already known oleraceins, and the other 25 are new structures, previously undescribed in the literature, belonging to the oleracein class. Moreover, diagnostic fragment ions were selected, based on which clustering algorithms and visualizations were employed. As we demonstrate, clustering methods can provide valuable insights into the mass fragmentation elucidation of natural compounds in complex mixtures.
Analysis of Non-Symmetrical Heat Removal during Casting of Steel Billets and Slabs
Adán Ramirez-Lopez, Omar Davila-Maldonado, Alfronso Nájera-Bastida, Rodolfo Morales, Jafeth Rodríguez-Ávila, Carlos Rodrigo Muñiz-Valdés
Subject: Materials Science, Biomaterials Keywords: Heat removal; Finite difference method; Computer simulation; Continuous casting
Steel is one of the essential materials of the world's civilization. It is used to produce many products, such as pipelines, mechanical elements in machines, vehicles, profiles, and beam sections for buildings, across many industries. Until the 1950s, steel production required a complex process known as ingot casting; for years, steelmakers focused on developing and simplifying this process. The result was the continuous casting process (CCP), the most productive method of producing steel. The CCP allows significant volumes of steel sections to be produced without interruption and is more productive than the former ingot casting process. The CCP begins by transferring the liquid steel from the steel ladle to a tundish. This tundish, or vessel, distributes the liquid steel, flowing through its volume, to one or more strands with water-cooled copper molds. The mold is the primary cooling system (PCS), solidifying a steel shell able to withstand the liquid core and its friction forces against the mold wall. Further down from the mold, rolls drive the steel section through the secondary cooling system (SCS). Here the steel section is cooled, solidifying the remaining liquid core, by sprays placed in every cooling segment all around the billet and along the curved section of the machine. Finally, the steel strand moves to a horizontal, straight, spray-free zone, losing heat by radiation, where the billet cools down to total solidification. A moving torch cutting scissor splits the billet to the desired length at the end of this heat-radiant zone.
Working Paper TECHNICAL NOTE
Report on Results of the Students' Project about Crank-Nicolson Method for Advection Equation
Peter Frolkovič, Kristián Balaj, Matej Holý, Katarína Juhásová, Andrej Petričko, Adam Štuller
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: advection, Crank-Nicolson, finite difference method, upwind, numerical solution
Online: 6 July 2021 (12:08:22 CEST)
Our aim is to implement and test some less well-known numerical methods that might be used to solve an advection equation. The description is restricted to the information necessary to implement and test the presented methods, and it is given only for the one-dimensional case.
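A minimal one-dimensional Crank-Nicolson sketch for u_t + a u_x = 0 with periodic boundaries and central differences (an illustration of the method under these assumptions, not the project's code). With a skew-symmetric difference matrix the scheme conserves the discrete L2 norm exactly:

```python
import numpy as np

# Crank-Nicolson for u_t + a u_x = 0 on a periodic grid with central
# differences in space (illustrative parameters).
n, a, dx, dt = 64, 1.0, 1.0 / 64, 0.01
c = a * dt / (4.0 * dx)                  # coefficient from (a/2)*(dt)/(2 dx)

# Time step solves (I + c D) u_new = (I - c D) u_old, where D is the
# periodic central-difference matrix (skew-symmetric).
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0
    D[i, (i - 1) % n] = -1.0
A = np.eye(n) + c * D
B = np.eye(n) - c * D

x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)                # smooth initial profile
for _ in range(100):                     # advance to t = 1.0
    u = np.linalg.solve(A, B @ u)
```

Because D is skew-symmetric, the update matrix A⁻¹B is orthogonal (a Cayley transform), which is why the discrete L2 norm of u is preserved regardless of the time step, in line with the scheme's unconditional stability.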
How Would Different Types of Negative Life Events Predict Adolescents' Suicidal Ideation? An Empirical Study Based on the Western Region of China
Moye Xin, Xueyan Yang, Kun Liu
Subject: Social Sciences, Accounting Keywords: Negative life events; Suicidal ideation; Suicidology; Adolescents; Gender difference
Online: 4 March 2021 (15:56:04 CET)
Background: We attempted to determine whether there were gender differences in different types of negative life events and in suicidal ideation among Chinese adolescents, and then analyzed the relationship between different types of negative life events and suicidal ideation among these young students. Methods: Based on data from 6 middle schools and 3 universities in 3 cities of Western China, gender differences in different types of negative life events and suicidal ideation, and their related factors, were investigated and analyzed. Results: Gender differences were found across different types of negative life events and in suicidal ideation; for some specific types, negative life events could predict the intensity of suicidal ideation by gender. Conclusions: Negative life events proved to be risk factors for adolescents' suicidal ideation regardless of gender, but the specific types of negative life events that had a significant impact on suicidal ideation showed clear gender divisions. For males, negative life events involving punishment and adaptation had a significant, boosting impact on suicidal ideation: the higher the scores for punishment and adaptation events, the greater the intensity of male adolescents' suicidal ideation. These two types of negative life events may thus be the main stressors predicting male adolescents' suicidal ideation. For females, in addition to punishment, all other types of negative life events had significant impacts on suicidal ideation and can be treated as the main stressors triggering female adolescents' suicidal ideation. Additionally, parents' marital status (remarriage and divorce) proved to be a significant indicator of adolescents' suicidal ideation, and age proved to be strongly correlated with suicidal ideation among female adolescents.
Determination of the Earthquake Epicenter from Line-of-Sight Displacement Images Obtained by Sentinel-1A/B Radar Data Using the GMTSAR Software Package
Asset Akhmadiya, Khuralay Moldamurat, Nabi Nabiyev, Aigerim Kismanova
Subject: Earth Sciences, Atmospheric Science Keywords: displacement; radar image processing; phase difference; interferometry; earthquake epicenter
This article describes a technology for determining earthquake epicenters with radar remote sensing, using Sentinel-1A/B as an example. To determine the epicenter of an earthquake, displacements of the Earth's crust were analyzed using radar remote sensing data obtained for ascending and descending orbits. The coordinates of the earthquake epicenters were found from the maxima of the line-of-sight displacement images. The displacement of the Earth's crust was obtained by processing in the GMTSAR package in a VirtualBox virtual machine running Linux Ubuntu 16.04. Two earthquakes that occurred in 2020, in Western Xizang, China, and in Doganyol, Turkey, were studied to determine the accuracy of finding epicenters using the ascending and descending orbits of Sentinel-1A/B. The maximum deviation from the officially registered epicenter coordinates was 15.38 km for Doganyol and 3.2 km for the Western Xizang earthquake. The negative displacement was 90 mm for Doganyol and 50 mm for Western Xizang.
Global Dynamics of a Higher Order Difference Equation with a Quadratic Term
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations; global stability; rate of convergence; boundedness; periodicity; semicycle
In this paper, we investigate the dynamics of the following higher-order difference equation: x_{n+1} = A + B(x_n/x_{n-m}²), where A, B, and the initial conditions are positive numbers and m ∈ {2,3,⋯}. In particular, we study the boundedness, periodicity, semi-cycles, global asymptotic stability, and rate of convergence of solutions of the related higher-order difference equation.
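The equation is straightforward to iterate numerically; its positive equilibrium solves x = A + B/x, i.e. x = (A + √(A² + 4B))/2. An illustrative sketch (the parameter values below are chosen arbitrarily, not taken from the paper):

```python
# Iterate x_{n+1} = A + B * x_n / x_{n-m}^2 (illustrative parameters).
A, B, m = 1.0, 1.0, 2
xs = [1.0] * (m + 1)                       # positive initial conditions
for n in range(m, m + 200):
    xs.append(A + B * xs[n] / xs[n - m] ** 2)

# Since all terms are positive, every iterate exceeds A, so the
# sequence is bounded below by A.
equilibrium = (A + (A * A + 4 * B) ** 0.5) / 2   # solves x = A + B / x
```

Plotting such orbits for different A, B, and m gives a quick feel for the semi-cycle and convergence behavior the paper characterizes analytically.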
The Effects of Age, Gender, and Control Device in a Virtual-Reality Driving Simulation
Wen-Te Chang
Subject: Arts & Humanities, Philosophy Keywords: VR; aging effect; gender difference; control device; wayfinding strategy
The application of virtual reality (VR) to driving simulation is not novel, yet little is known about the use of this technology by senior populations. The effects of age, sex, control device (joystick or handlebar), and task type on wayfinding proficiency in a VR driving simulation were explored. The driving experiment involved 96 randomly recruited participants: 48 young people and 48 seniors (split evenly by sex in each group). The experimental results and statistical analyses indicate that, in a VR driving scenario, task type significantly affected VR driving performance: navigational scores were significantly higher for the straight (easy) task than for the curved (difficult) task. Aging was the main driver of the significant and interacting effects of sex and control device. Interactions between age and sex indicated that the young group exhibited better wayfinding performance than the senior group, and that within the young group males performed better than females. Similarly, interactions between age and control device indicated that the handlebar resulted in better performance than the joystick in the young group, but no device difference was found in the senior group, owing to age or learning effects. The findings inform the evaluation of interface designs for navigational support systems, taking into consideration the effects of age, sex, control device, and task type within three-dimensional VR games and driving systems. With a VR driving simulator, seniors can test drive otherwise inaccessible products, such as electric bicycles or cars, using a computer at home.
Modeling Potential C, N, H Content in Aboveground Biomass with Spectral Data from Sentinel 2a
Neftalí Reyes-Zurita, Joaquín A. Rincón-Ramírez, Gerardo Rodríguez-Ortiz, José R. Enríquez-del Valle, Vicente A. Velasco-Velasco, Ernesto Castañeda-Hidalgo
Subject: Biology, Forestry Keywords: Normalized difference vegetation index; San Juan Lachao; Satellite image
Nutrient estimation in forest ecosystems through satellite images allows accurate data to be obtained, starting from the transformation of forest-stand data and its relationship, through modeling, with the spectral information of the image. The objective of the study was to quantify and validate the content of C, N, and H in aboveground tree biomass in managed stands using spatial modeling and satellite images. This study was conducted during 2017-2018 in managed forest stands in San Juan Lachao, Oaxaca, Mexico. Fifteen 400 m² experimental sites were selectively established, using a completely randomized experimental design of five silvicultural treatments with three replications. As part of data preprocessing, the normality and homogeneity-of-variance assumptions were checked using the Shapiro-Wilk and Bartlett tests, respectively. From the pixels, the average Normalized Difference Vegetation Index (NDVI) values surrounding the sampling sites were contrasted against the forest inventory data, and the regression models to estimate C, N, H, and biomass were generated. The models were validated against the NDVI. With the models we estimated 0.95 t ha-1 of biomass, which contains between 0.61 and 0.63 of C, 0.44-0.46 of N, and 0.24 of H. The models generated had coefficients of determination (R²) of 0.85 to 0.87, with significant parameters (p ≤ 0.0001). These results confirm that the use of Sentinel satellite images to estimate these elements in forest ecosystems, based on the relationship between inventory data and the NDVI, is highly reliable.
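For reference, the NDVI underlying these models is computed from the red and near-infrared reflectances (for Sentinel-2A, bands 4 and 8 respectively); a minimal sketch with hypothetical reflectance values, not the study's imagery:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red). The reflectance values below are
# hypothetical (Sentinel-2A convention: NIR = band 8, red = band 4).
nir = np.array([0.45, 0.50, 0.30])    # near-infrared reflectance
red = np.array([0.05, 0.08, 0.20])    # red reflectance
ndvi = (nir - red) / (nir + red)      # dense vegetation -> values near 1
```

Regressions of per-site biomass (and hence C, N, H content) against such per-pixel NDVI averages are the modeling step the abstract describes.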
Uniqueness of Functions with Its Shifts or Difference Operators
Rajshree Dhar
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Meromorphic function, Shared values, Nevanlinna theory, Shifts, Difference Operator
It is shown that if a non-constant meromorphic function f(z) is of finite order and shares certain values with its shifts or difference operators, then f(z) coincides with that particular shift or difference operator.
The Effect of Temporal Gradient of Vegetation Indices on Early-Season Wheat Area Estimation Using Random Forest Classification
Mousa Saei Jamal Abad, Ali A. Abkar, Barat Mojaradi
Subject: Earth Sciences, Geoinformatics Keywords: wheat classification; random forest; spectral gradient difference; vegetation indices
The early-season area estimation of winter wheat, as a strategic crop, is important for decision makers. Classification of multi-temporal images is an approach affected by many factors, such as the training sample size, the frequency and timing of acquisitions, the type of vegetation indices (VIs), the temporal gradients of spectral bands and VIs, the choice of classifier, and missing values caused by cloudy conditions. This paper addresses the impact of appropriate acquisition frequency and timing, VI type, and the spectral and VI gradients on the random forest (RF) classifier when missing values exist in multi-temporal images. To investigate the appropriate temporal resolution for image acquisition, the study area was selected in the overlapping area between two LDCM paths. In our method, the missing values of cloudy bands for each pixel are retrieved as the mean of the k nearest ordinary pixels. The multi-temporal image analysis is then performed under different scenarios provided by decision makers in terms of the crop types to be extracted early in the season in the study areas. The classification accuracy obtained by the RF decreases by only 1.6% when temporally missing values are retrieved by the proposed method, which is an acceptable result. Moreover, the experimental results demonstrate that if the temporal resolution of Landsat 8 were increased to one week, the classification could be conducted earlier with almost better results in terms of OA and kappa. Incorporating VIs, along with the temporal gradients of spectral bands and VIs, as new features in the RF improved the OA and kappa by 3.1% and 6.6%, respectively. Furthermore, the results showed that if only one image of the seasonal changes of crops is available, the temporal gradients of VIs and spectral bands play the main role in discriminating wheat from barley. The experiments also demonstrated that if wheat and barley are merged into a single class, the crop area can be estimated two months earlier, with values of 97.1 and 93.5 for OA and kappa, respectively.
Estimating Flooding at River Spree Floodplain Using HEC-RAS Simulation
Munshi Md Shafwat Yazdan, Md Tanvir Ahad, Raaghul Kumar, Md Abdullah Al Mehedi
Subject: Engineering, Civil Engineering Keywords: 2D floodplain modeling; HEC-RAS; River Renaturation; finite difference approximation
River renaturation can be an effective management method for restoring a floodplain's natural capacity and minimizing impacts during high-flow periods. A coupled 1D-2D HEC-RAS model, in which the floodplain was treated as 2D and the main channel as 1D, was used to simulate flooding in the restored reach of the Spree River. In this model, the 1D computations use a finite difference approximation based on the Preissmann scheme, while the 2D computations use a finite volume approximation. To understand the sensitivity of the parameters and the model, several scenarios were simulated using different time steps and grid sizes. Additionally, dykes, dredging, and changes to the vegetation pattern were used to simulate flood mitigation measures. The model predicted that, in most scenarios without mitigation measures, flooding would occur mostly in the downstream portion of the channel, whereas with mitigation measures flooding in the floodplain would be greatly reduced. By preserving the natural balance of the channel's floodplain, the restored area can be kept in good condition. Therefore, mitigation measures that balance the area's economic and environmental aspects must be considered in light of the potential for floods.
Are There Sex Differences in Self-Reported Childhood Maltreatment in Major Depressive and Bipolar Disorders? A Retrospective Cross-Sectional Study
Daniela Caldirola, Tatiana Torti, Francesco Cuniberti, Silvia Daccò, Alessandra Alciati, Koen Schruers, Giovanni Martinotti, Domenico De Berardis, Giampaolo Perna
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: childhood trauma; major depressive disorder; bipolar disorder; sex difference; age
Background. We investigated, for the first time, whether there are any sex differences in retrospective self-reported childhood maltreatment (CM) in Italian adult patients with major depressive disorder (MDD) or bipolar disorder (BD). Furthermore, the potential impact of patients' age on the CM self-report was investigated. Methods. This retrospective, cross-sectional study used the data documented in the electronic medical records of patients who were hospitalized for a 4-week psychiatric rehabilitation program. CM was assessed using the 28-item Childhood Trauma Questionnaire (CTQ), which evaluates emotional, physical, and sexual abuse, as well as emotional and physical neglect. Linear and logistic regression models were used (α = 0.01). Results. Three hundred thirty-five patients with MDD (255 women and 80 men) and 168 with BD (97 women and 71 men) were included. In both samples, considerable CM rates were identified, but no statistically significant sex differences were detected across the CTQ-based CM aspects. There was a significant association, with no sex differences, between increasing patient age and a decreasing burden of CM. Conclusion. Both women and men with MDD or BD experienced a similar and considerable CM burden. Our findings support routine CM assessment in psychiatric clinical practice.
Experimental Study on the Heat Transfer Performance of Pump-assisted Capillary Phase-change Loop
Xiaoping Yang, Gaoxiang Wang, Cancan Zhang, Jie Liu, Jinjia Wei
Subject: Engineering, Energy & Fuel Technology Keywords: liquid cooling; phase-change loop; pressure difference; heat transfer enhancement
To overcome the two-phase flow instability of traditional boiling heat dissipation technologies, a porous wick was used for liquid-vapor isolation, thus realizing efficient and stable boiling heat dissipation. A pump-assisted capillary phase-change loop with methanol as the working medium was established to study the effect of liquid-vapor pressure difference and heating power on its start-up and steady-state characteristics. The results indicated that the evaporator undergoes four heat transfer modes: flooded, partially flooded, thin-film evaporation and overheated. The thin-film evaporation mode was the most efficient, with the shortest start-up period. The heat transfer modes were determined by the liquid-vapor pressure difference and power. The heat transfer coefficient could be significantly improved and the thermal resistance reduced by increasing the liquid-vapor pressure difference as long as it did not exceed 8 kPa. However, when the liquid-vapor pressure difference exceeded 8 kPa, its influence on the heat transfer coefficient weakened. In addition, a two-dimensional heat transfer mode distribution diagram considering both liquid-vapor pressure difference and power was drawn from a large number of experiments. In engineering applications, the liquid-vapor pressure difference can be controlled to maintain efficient thin-film evaporation in order to achieve the optimum heat dissipation effect.
Propagation and Transformation of Vortexes in Linear and Nonlinear Radio-photon Systems
Valery H. Bagmanov, Albert Kh. Sultanov, Ivan K. Meshkov, Azat R. Gizatulin, Raoul R. Nigmatullin, Airat Zh. Sakhabutdinov
Subject: Physical Sciences, Optics Keywords: vortex propagation; difference frequency generation; nonlinear medium; vortex beams conversion
The article is devoted to issues related to the propagation and transformation of vortexes in the optical frequency range. Within the framework of the traditional and a modified model of the slowly varying envelope approximation (SVEA), the process of converting vortex beams of the optical domain into vortex beams of the terahertz radio range, based on nonlinear difference frequency generation in a medium with a second-order susceptibility, is considered. The modified SVEA splits a slowly varying amplitude into two factors, which makes it possible to describe the three-wave mixing process more accurately. A theoretical substantiation of the rule of conversion of vortex beam topological charges is given: the topological charge of the output radio-vortex beam is equal to the difference between the topological charges of the input optical vortex beams. A numerical simulation model of the processes under consideration has been implemented and analyzed.
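The charge-conversion rule stated above can be checked numerically on a minimal phase-only model: the difference-frequency polarization is proportional to E1·E2*, so its azimuthal phase winding is the difference of the input windings. The charges l1, l2 below are arbitrary example values:

```python
import numpy as np

phi = np.linspace(0.0, 2.0*np.pi, 401)   # azimuthal angle around the beam axis
l1, l2 = 3, 1                            # example input optical topological charges
E1 = np.exp(1j*l1*phi)                   # vortex beam 1 (phase ring only)
E2 = np.exp(1j*l2*phi)                   # vortex beam 2
P = E1*np.conj(E2)                       # difference-frequency term ~ E1 * E2^*

def topological_charge(field):
    """Winding number of the phase accumulated around the axis."""
    ph = np.unwrap(np.angle(field))
    return round((ph[-1] - ph[0])/(2.0*np.pi))
```

Here `topological_charge(P)` returns l1 - l2, the charge of the generated radio-vortex beam.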
Dynamics of a System of Higher Order Difference Equations with Quadratic Terms
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Difference equations; global asymptotic stability; boundedness; rate of convergence; oscillation
In this paper we investigate the global asymptotic stability of the following system of higher order difference equations with quadratic terms: $x_{n+1}=A+B\frac{y_{n}}{y_{n-m}^{2}}$, $y_{n+1}=A+B\frac{x_{n}}{x_{n-m}^{2}}$, where A and B are positive numbers and the initial values are positive numbers. We also study the boundedness, rate of convergence and oscillation behaviour of the solutions of the related system.
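The system can be iterated directly; the equilibrium of the symmetric fixed point satisfies x̄ = A + B/x̄, i.e. x̄ = (A + sqrt(A² + 4B))/2. The parameter values A = 2, B = 1, m = 1 below are illustrative choices, not taken from the paper:

```python
def iterate_system(A, B, m, x0, y0, n_steps=300):
    """Iterate x_{n+1} = A + B*y_n/y_{n-m}^2, y_{n+1} = A + B*x_n/x_{n-m}^2
    from constant positive initial data x_{-m} = ... = x_0 = x0 (same for y)."""
    xs = [x0]*(m + 1)
    ys = [y0]*(m + 1)
    for _ in range(n_steps):
        x_new = A + B*ys[-1]/ys[-1 - m]**2   # uses y_n and y_{n-m}
        y_new = A + B*xs[-1]/xs[-1 - m]**2   # uses x_n and x_{n-m}
        xs.append(x_new)
        ys.append(y_new)
    return xs[-1], ys[-1]
```

For A = 2, B = 1 the orbit settles on the equilibrium x̄ = 1 + sqrt(2), consistent with global asymptotic stability in that parameter regime.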
Biofilm Early Stages of Growth and Accumulation Theoretical Model
José Carvalho, Manuel Carrondo, Luis Bonilla
Subject: Engineering, Automotive Engineering Keywords: biofilm; Miller recurrent algorithm; Bessel functions; differential-difference master equations
A theoretical model describing the evolution over time, in its early stages, of the growth and accumulation of biofilm bacterial mass is introduced. The model requires the solution of a system of differential-difference master equations. The application of an algorithm like Miller's three-term backward recurrence, already known for Bessel functions of the first kind, allows an exact calculation of the solutions of such equations over a wide range of parameter values and times. For the biofilm model, a five-term recurrence is deduced and applied in a backwards computation. A suitable normalisation condition completes the solution.
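The five-term biofilm recurrence is not given in the abstract, but the cited three-term Miller algorithm for Bessel functions of the first kind illustrates the idea: recur backwards from arbitrary trial values at a high order, then rescale with a normalisation identity (here J0 + 2·ΣJ2k = 1):

```python
def bessel_j(n, x, N=50):
    """Miller's backward three-term recurrence for J_n(x).
    Start from trial values at orders N+1 and N, recur down via
    J_{k-1} = (2k/x) J_k - J_{k+1}, then normalise."""
    f = [0.0]*(N + 2)
    f[N + 1], f[N] = 0.0, 1e-30          # arbitrary small trial values
    for k in range(N, 0, -1):
        f[k - 1] = (2.0*k/x)*f[k] - f[k + 1]
    norm = f[0] + 2.0*sum(f[2::2])       # J0(x) + 2*sum_k J_{2k}(x) = 1
    return f[n]/norm
```

Backward recurrence is stable for J_n (the wanted solution dominates downwards), which is exactly why the paper applies the analogous backwards computation to its five-term recurrence.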
Dynamics of System of Higher Order Difference Equations with Quadratic Terms
This paper aims to investigate the global asymptotic stability of the following system of higher order difference equations with quadratic terms: x_{n+1}=A+B((y_{n})/(y_{n-m}²)), y_{n+1}=A+B((x_{n})/(x_{n-m}²)), where A and B are positive numbers and the initial values are positive numbers. We also study the rate of convergence and oscillation behaviour of the solutions of the related system.
Second Harmonic Generation From Phase-Engineered Metasurfaces of Nanoprisms
Kannta Mochizuki, Mako Sugiura, Hirofumi Yogo, Stefan Lundgaard, Jingwen Hu, Soon Hock Ng, Yoshiaki Nishijima, Saulius Juodkazis, Atsushi Sugita
Subject: Physical Sciences, Optics Keywords: metasurfaces, second harmonic generation, phase control, finite difference time domain
Metasurfaces of gold (Au) nanoparticles on a SiO2-Si substrate were fabricated for the enhancement of second harmonic generation (SHG) using electron beam lithography and lift-off. Triangular Au nanoprisms, which are non-centrosymmetric and support second-order non-linearity, were examined for SHG. The thickness of the SiO2 spacer is shown to be an efficient parameter for spectrally tuning the resonance to maximise SHG. Electrical field enhancement at the fundamental wavelength was shown to define the intensity of the second harmonic. Numerical modeling of light enhancement was verified by experimental measurements of SHG and reflectivity spectra at normal incidence. At the plasmonic resonance, SHG is enhanced up to ∼3.5×10³ times under the optimised conditions.
Dynamics of the Rational Difference Equation $x_{n+1}=px_{n}+\frac{q}{x_{n-1}^2}$
Sk Sarif Hassan
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: rational difference equation; asymptotic stability; periodic solutions; chaos and fractal
A second order rational difference equation $$x_{n+1}=px_{n}+\frac{q}{x_{n-1}^2}$$ with parameters $p$ and $q$ lying in $(0,1)$ is studied. The dynamics of the equilibrium is characterized through the trichotomy $p<\frac{1}{2}$, $p=\frac{1}{2}$ and $p>\frac{1}{2}$. It is found that there are no periodic solutions of period $2$ or $3$, but periodic solutions of periods $5$ and $10$ are obtained computationally.
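The computational search mentioned above amounts to iterating the map and testing the tail of the orbit for small periods. The sketch below does this; the parameter pair (p, q) = (0.7, 0.3) is an illustrative choice from the stable regime p > 1/2, where the equilibrium is x̄ = (q/(1-p))^(1/3) = 1:

```python
def orbit(p, q, x0, x1, n=2000):
    """Iterate x_{n+1} = p*x_n + q/x_{n-1}^2."""
    xs = [x0, x1]
    for _ in range(n):
        xs.append(p*xs[-1] + q/xs[-2]**2)
    return xs

def tail_period(xs, tol=1e-8, max_p=12, tail=200):
    """Smallest T such that the orbit tail is T-periodic (T=1: equilibrium)."""
    t = xs[-tail:]
    for T in range(1, max_p + 1):
        if all(abs(t[i] - t[i - T]) < tol for i in range(T, len(t))):
            return T
    return None
```

Scanning p below 1/2 with the same tooling is how candidate period-5 and period-10 orbits would be detected numerically.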
On the Solutions of Four Rational Difference Equations Associated to Tribonacci Numbers
İnci Okumuş, Yüksel Soykan
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations, solution, equilibrium point, tribonacci number, global asymptotic stability
In this study, we investigate the form of solutions, stability character and asymptotic behavior of the following four rational difference equations x_{n+1} = (1/(x_{n}(x_{n-1}±1)±1)), x_{n+1} = ((-1)/(x_{n}(x_{n-1}±1)∓1)), such that their solutions are associated with Tribonacci numbers.
Discrete Maximum Principle and Energy Stability of Compact Difference Scheme for the Allen-Cahn Equation
Dan Tian, Yuanfeng Jin, Gang Lv
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Allen-Cahn equation; compact difference scheme; maximum principle; energy stability
In the paper, a fully discrete compact difference scheme with $O(\tau^{2}+h^{4})$ accuracy is established for the numerical approximation of the one-dimensional Allen-Cahn equation. It is proved that the numerical solutions satisfy a discrete maximum principle under reasonable step-ratio and time-step constraints, and the energy stability of the fully discrete scheme is investigated. An example is finally presented to show the effectiveness of the scheme.
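The discrete maximum principle (|u| ≤ 1 preserved under a time-step restriction) can be seen already on a plain explicit second-order scheme; the sketch below uses that simpler scheme, not the paper's compact $O(\tau^{2}+h^{4})$ scheme, and the equation form u_t = ε²u_xx + u - u³ with periodic boundary conditions is an assumed normalisation:

```python
import numpy as np

def allen_cahn_explicit(eps=0.1, h=1/64, tau=1e-4, n_steps=2000):
    """Explicit FD for u_t = eps^2 u_xx + u - u^3 on [-1,1), periodic.
    With tau <= h^2/(2 eps^2) the diffusion step is a convex combination
    and the reaction step maps [-1,1] into itself, so |u| <= 1 is
    preserved: a discrete maximum principle."""
    x = np.arange(-1.0, 1.0, h)
    u = 0.9*np.sin(np.pi*x)                          # |u0| <= 1
    for _ in range(n_steps):
        lap = (np.roll(u, 1) - 2.0*u + np.roll(u, -1))/h**2
        u = u + tau*(eps**2*lap + u - u**3)
    return u
```

The compact scheme of the paper achieves the same bound at fourth-order spatial accuracy, under an analogous step-ratio constraint.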
Fierce Heat and Players' Health: Examining the View on Japan High School Baseball
Eiji Yamamura
Subject: Social Sciences, Economics Keywords: high school baseball; health; heatwave; heatstroke; sustainability; environment; gender difference; Japan
A summer high school baseball tournament is held every mid-summer at Koshien Stadium. "Koshien Baseball" is very popular in Japan; however, it faces the problem of extremely high temperatures during games, and high school players are threatened by this harsh environment. For this reason, Internet surveys were conducted twice with the same individuals, gathering their views on the Koshien tournament before and after the provision of information about environmental change in Japan. Using these data, this study examined how views changed once the information had been provided. Compared with their prior views, it was found that (1) respondents were more likely to agree that the management rules of the Koshien tournament should be altered to protect players' health, and (2) the impact of providing information was larger for female, young, and highly educated respondents.
On the Global Asymptotic Stability of A Two Dimensional System of Difference Equations with Quadratic Terms
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: difference equations; dynamical systems; global stability; rate of convergence; boundedness; oscillation
In this paper, we study the global asymptotic stability of the following system of difference equations with quadratic terms: x_{n+1}=A+B((y_{n})/(y_{n-1}²)), y_{n+1}=A+B((x_{n})/(x_{n-1}²)), where A and B are positive numbers and the initial values are positive numbers. We also investigate the rate of convergence and oscillation behaviour of the solutions of the related system.
Comparison of Smallest Eigenvalues for Nabla Fractional Boundary Value Problems
Jagan Mohan Jonnalagadda
Subject: Mathematics & Computer Science, Analysis Keywords: nabla fractional difference; boundary value problem; cone; u0-positive operator; eigenvalue
In this article, we establish the existence of and then compare smallest eigenvalues for nabla fractional boundary value problems involving a fractional difference boundary condition, using the theory of u0-positive operators with respect to a cone in a Banach space.
Wittgenstein and Derrida on the Possibility of Meaning: Hierarchy or Non-Hierarchy, Simple or Non-simple Origin, Deferral or Non-Deferral
Neil B MacDonald
Subject: Arts & Humanities, Philosophy Keywords: Wittgenstein; Derrida; meaning; hierarchy; deferral; learnability; teachability; différance; origin; identity; difference
Meaning understood in terms of teachability and learnability is crucial to Wittgenstein's later work. As regards the resolution of philosophical problems – and epistemological problems in particular - this approach seems to posit a hierarchy of meaning that excludes endless deferral. This is the basis of Wittgenstein's attack on philosophical scepticism. Derrida's approach to language seems to require both non-hierarchy and endless deferral. Consequently fundamental to his concept of origin is identity and difference simultaneously, irreducibly, non-simply. One question is whether it is possible for there to be a compromise between the two philosophers: a hierarchy of meaning that does not in principle exclude endless deferral.
Growth of the Entire or Meromorphic Solutions of Differential-Difference Equations
Subject: Keywords: entire and meromorphic functions; differential-difference polynomial; shared value; Nevanlinna theory
In this paper, we study the entire or meromorphic solutions of differential-difference equations in f(z), its shifts, its derivatives and the derivatives of its shifts, and extend some of Hayman's results to differential-difference polynomials.
Asymptotics and Confluence for a Singular Nonlinear Q-Difference-Differential Cauchy Problem
Stephane Malek
Subject: Mathematics & Computer Science, Analysis Keywords: asymptotic expansion; confluence; formal power series; partial differential equation; q-difference equation
We examine a family of nonlinear q-difference-differential Cauchy problems obtained as a coupling of linear Cauchy problems containing dilation q-difference operators, recently investigated by the author, and quasi-linear Kowalevski type problems that involve contraction q-difference operators. We build up local holomorphic solutions to these problems. Two aspects of these solutions are explored. One facet deals with asymptotic expansions in the complex time variable for which a mixed type Gevrey and q-Gevrey structure is exhibited. The other feature concerns the problem of confluence of these solutions as q tends to 1.
SmartScan: An Intelligent Online Scan Sequence Optimization Approach for Uniform Thermal Distribution, Reduced Residual Stresses and Deformations in PBF Additive Manufacturing
Keval S. Ramani, Chuan He, Yueh-Lin Tsai, Chinedum E. Okwudire
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: 3D printing; scanning strategy; finite difference method; radial basis functions; optimal control.
Parts produced by laser or electron-beam powder bed fusion (PBF) additive manufacturing are prone to residual stresses, deformations, and other defects linked to non-uniform temperature distribution during the manufacturing process. Several researchers have highlighted the important role scan sequence plays in achieving uniform temperature distribution in PBF. However, scan sequence continues to be determined offline based on trial-and-error or heuristics, which are neither optimal nor generalizable. To address these weaknesses, we have articulated a vision for an intelligent online scan sequence optimization approach to achieve uniform temperature distribution, hence reduced residual stresses and deformations, in PBF using physics-based and data-driven thermal models. This paper proposes SmartScan, our first attempt towards achieving our vision using a simplified physics-based thermal model. The conduction and convection dynamics of a single layer of the PBF process are modeled using the finite difference method and radial basis functions. Using the model, the next best feature (e.g., stripe or island) that minimizes a thermal uniformity metric is found using control theory. Simulations and experiments involving laser marking of a stainless steel plate are used to demonstrate the effectiveness of SmartScan in comparison to existing heuristic scan sequences for stripe and island scan patterns. In experiments, SmartScan yields up to 43% improvement in average thermal uniformity and 47% reduction in deformations (i.e., warpage) compared to existing heuristic approaches. It is also shown to be robust, and computationally efficient enough for online implementation.
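The "next best feature" selection at the heart of SmartScan can be illustrated on a deliberately tiny surrogate: a 1D ring of islands with cooling and neighbour conduction, where the greedy controller scans whichever island minimizes the standard deviation of the resulting temperature field. This toy model stands in for the paper's finite-difference/RBF thermal model; all constants (dT, decay, k) are invented for illustration:

```python
import numpy as np

def step(T, scan, dT=10.0, decay=0.85, k=0.3):
    """One step of a toy 1D island model: cooling toward ambient,
    conduction to periodic neighbours, then laser energy on one island."""
    T = decay*T + k*(np.roll(T, 1) + np.roll(T, -1) - 2.0*T)
    T[scan] += dT
    return T

def smartscan_order(n_islands=8):
    """Greedy 'next best feature': scan the island whose heating
    minimizes the std of the predicted temperature field."""
    T = np.zeros(n_islands)
    remaining = list(range(n_islands))
    order = []
    while remaining:
        best = min(remaining, key=lambda i: step(T, i).std())
        T = step(T, best)
        order.append(best)
        remaining.remove(best)
    return order
```

Even in this toy setting the greedy rule avoids scanning next to a freshly heated island, which is the qualitative behaviour that yields the more uniform temperature fields reported in the paper.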
Convection – Diffusion – Radiation Heat and Mass Transfer to a Sphere Accompanied by a Surface Exothermal Chemical Reaction
Gheorghe Juncu
Subject: Engineering, Biomedical & Chemical Engineering Keywords: heat transfer; mass transfer; convection-radiation; surface reaction; diffusion approximation; finite difference.
The steady-state, coupled heat and mass transfer from a fluid flow to a sphere accompanied by an exothermal catalytic chemical reaction on the surface of the sphere is analysed taking into consideration the effect of thermal radiation. The flow past the sphere is considered steady, laminar and incompressible. The radiative transfer is modeled by P0 and P1 approximations. The mathematical model equations were discretized by the finite difference method. The discrete equations were solved by the defect correction – multigrid method. The influence of thermal radiation on the sphere surface temperature, concentration and reaction rate was analysed for three parameter sets of the dimensionless reaction parameters. The numerical results show that only for very small values of the Prater number the effect of thermal radiation on the surface reaction is not significant.
Body Mass Index and Birth Weight Improve Polygenic Risk Score for Type 2 Diabetes
Avigail Moldovan, Yedael Y. Waldman, Nadav Brandes, Michal Linial
Subject: Medicine & Pharmacology, Allergology Keywords: Body weight; Genetic variations; GWAS; Metabolic disease; Obesity; Sex difference; UK-Biobank
One of the major challenges in the post-genomic era is elucidating the genetic basis of human diseases. In recent years, studies have shown that polygenic risk scores (PRS), based on aggregated information from millions of variants across the human genome, can estimate individual risk for common diseases. In practice, current medical practice still predominantly relies on physiological and clinical indicators to assess personal disease risk. For example, caregivers mark individuals with high body mass index (BMI) as having an increased risk of developing type 2 diabetes (T2D). An important question is whether combining PRS with clinical metrics can increase the power of disease prediction, in particular from early life. In this work we examined this question, focusing on T2D. We show that an integrated approach combining adult BMI and PRS achieves considerably better prediction than each of the measures alone on unrelated Caucasians in the UK Biobank (UKB, n=290,584). Likewise, integrating PRS with self-reports on birth weight (n=172,239) and comparative body size at age ten (n=287,203) also substantially enhances prediction compared to each of the components. While the integration of PRS with adult BMI achieved the best results, the latter are early-life measurements that can be integrated already in childhood, allowing preemptive intervention for those at high risk of developing T2D. Our integrated approach can be easily generalized to other diseases with relevant early-life measurements.
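The gain from integrating a PRS with a clinical covariate can be sketched on synthetic data: when both contribute independently to risk, a combined linear score separates cases from controls better (higher AUC) than either alone. Everything below is simulated; the effect sizes and the additive logit model are assumptions for illustration, not UK Biobank results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
prs = rng.standard_normal(n)                  # standardized polygenic score
bmi = rng.standard_normal(n)                  # standardized adult BMI
logit = prs + bmi - 1.0                       # assumed additive true risk model
y = rng.random(n) < 1.0/(1.0 + np.exp(-logit))  # simulated T2D status

def auc(score, y):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    ranks = np.empty(len(score))
    ranks[np.argsort(score)] = np.arange(1, len(score) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y].sum() - n1*(n1 + 1)/2.0)/(n0*n1)
```

Under this model, `auc(prs + bmi, y)` exceeds both `auc(prs, y)` and `auc(bmi, y)`, mirroring the paper's finding that the integrated score outperforms its components.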
Asymptotics and Confluence for Some Linear q-Difference-Differential Cauchy Problem
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: asymptotic expansion; confluence; formal power series; partial differential equation; q-difference equation
A linear Cauchy problem with polynomial coefficients which combines q-difference operators for q>1 and differential operators of irregular type is examined. A finite set of sectorial holomorphic solutions with respect to the complex time is constructed by means of classical Laplace transforms. These functions share a common asymptotic expansion in the time variable which turns out to carry a double-layered structure coupling q-Gevrey and Gevrey bounds. In the last part of the work, the problem of confluence of these solutions as q tends to 1 is investigated.
On the Solutions of Four Second-Order Nonlinear Difference Equations
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations, form of solutions, equilibrium point, tribonacci number, global asymptotic stability.
This paper deals with the form, the stability character, the periodicity and the global behavior of solutions of the following four rational difference equations x_{n+1} = ((±1)/(x_{n}(x_{n-1}±1)-1)) x_{n+1} = ((±1)/(x_{n}(x_{n-1}∓1)+1)).
Improving Yield Mapping Accuracy Using Remote Sensing
Rodrigo Gonçalves Trevisan, Luciano Shozo Shiratsuchi, David S. Bullock, Nicolas Federico Martin
Subject: Biology, Agricultural Sciences & Agronomy Keywords: on-farm precision experimentation; normalized difference vegetation index; data filtering; error correction
The objective of this work was to investigate the use of remotely sensed vegetation indices to improve the quality of yield maps. The method was applied to the yield data of twelve cornfields from the Data Intensive Farm Management project. The results revealed the need to time-shift the yield values by up to three seconds to better match the sensor readings with the geographic coordinates. The residuals of the yield prediction model were used to identify points with unlikely yield values for their location, as an alternative to traditional approaches using local spatial statistics, without any assumption of spatial dependence or stationarity. The temporal and spatial distribution of the standardized coefficients for each experimental unit highlighted the presence of trends in the data. At least five of the twelve fields presented trends that could have been induced by data collection.
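One simple way to estimate the time shift mentioned above is to pick the lag that maximizes the correlation between the logged yield and a reference signal along the harvester pass (e.g. an NDVI transect). The abstract does not describe the authors' exact procedure, so this lag-scan is an assumed illustrative approach:

```python
import numpy as np

def best_lag(yield_series, ref_series, max_lag=10):
    """Shift (in samples) of the yield log relative to a reference signal
    that maximizes Pearson correlation; at 1 Hz logging a lag of 3
    samples corresponds to a 3-second grain-flow delay."""
    best, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        r = np.corrcoef(np.roll(yield_series, -lag), ref_series)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best
```

Applying the recovered lag before joining yield points to image pixels removes the systematic spatial offset that would otherwise contaminate the residual-based outlier screening.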
D3 Dihedral Logistic Map of Fractional Order
Marius-F. Danca, Nikolay Kuznetsov
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Discrete fractional-order system; Caputo delta fractional difference; Hidden attractor; Dihedral symmetry D3
In this paper the D3 dihedral logistic map of fractional order is introduced. The map presents a dihedral symmetry D3. It is numerically shown that the construction and interpretation of the bifurcation diagram versus the fractional order require special attention. The system stability is determined and the problem of hidden attractors is analyzed. Also, analytical and numerical results show that the chaotic attractor of integer order, with D3 symmetries, loses its symmetry in the fractional-order variant.
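A Caputo-delta fractional difference map replaces the one-step update with a weighted sum over the whole history. The sketch below uses the widely cited Wu-Baleanu-type fractional logistic map as an illustration of that memory structure; it is not the paper's D3 dihedral map, and the parameters are example values:

```python
import math

def frac_logistic(mu, nu, x0, n_steps):
    """Wu-Baleanu-type fractional logistic map (a common Caputo-delta
    discretization): x(n) = x0 + (mu/Gamma(nu)) * sum_{j=1}^{n}
    [Gamma(n-j+nu)/Gamma(n-j+1)] * x(j-1)*(1 - x(j-1)).
    For nu = 1 this reduces to x(n) = x(n-1) + mu*x(n-1)*(1-x(n-1))."""
    xs = [x0]
    for n in range(1, n_steps + 1):
        acc = 0.0
        for j in range(1, n + 1):
            # log-gamma avoids overflow in the memory weights
            w = math.exp(math.lgamma(n - j + nu) - math.lgamma(n - j + 1))
            acc += w*xs[j - 1]*(1.0 - xs[j - 1])
        xs.append(x0 + mu/math.gamma(nu)*acc)
    return xs
```

Because every new value depends on the full orbit, bifurcation diagrams versus the order nu must be computed with this growing memory in mind, which is the "special attention" the abstract refers to.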
Class 1 Heating Cycles: A New Class of Thermodynamic Cycles
Hong-Rui Li, Hua-Yu Li
Subject: Engineering, Energy & Fuel Technology Keywords: heating cycles; thermodynamic cycles; thermodynamics; temperature difference utilization; heating; cooling; cogeneration; thermal science
Thermodynamic cycles are not only the core concepts of thermal science, but also key approaches to energy conversion and utilization. So far, power cycles and refrigeration cycles have been the only two general classes of thermodynamic cycles. While diverse types of systems have been developed to perform thermodynamic cycles, no new general classes of thermodynamic cycles have been proposed. Based on the basic principles of thermodynamics, here we propose and analyze a new general class of thermodynamic cycles named class 1 heating cycles (HC-1s). Two basic forms of HC-1s are obtained by connecting six essential thermodynamic processes in the proper order and forming a thermodynamic cycle. HC-1s present the simplest and most general approach to utilizing the temperature difference between a high-temperature heat source and a medium-temperature heat sink to achieve efficient medium-temperature heating and/or low-temperature cooling. HC-1s fill the gaps that have existed since the origin of thermal science, and they will play significant roles in energy conservation and emission reduction.
Does Sex Dimorphism Exist in Dysfunctional Movement Patterns During the Sensitive Period of Adolescence?
Josip Karuc, Mario Jelčić, Maroje Sorić, Marjeta Mišigoj-Duraković, Goran Markovic
Subject: Medicine & Pharmacology, Allergology Keywords: FMSTM; functional movement screen; pubescence; maturation; fundamental movement patterns; functional movement; gender difference
This study aimed to investigate sex differences in the functional movement in the adolescent period. Seven hundred and thirty adolescents (365 boys) aged 16–17 years participated in the study. The participants performed standardized Functional Movement Screen™ (FMS™) protocol and a t-test was used to examine sex differences in the total functional movement screen score while the chi-square test was used to determine sex differences in the proportion of dysfunctional movement and movement asymmetries within the individual FMS tests. Girls demonstrated a higher total FMS™ score compared to boys (12.7 ± 2.3 and 12.2 ± 2.4, respectively; F=8.26, p=0.0054). Also, sex differences were present in several individual functional movement patterns where boys demonstrated a higher prevalence of dysfunctional movement compared to girls in patterns that challenge mobility and flexibility of the body, while girls underperformed in tests that have higher demands for upper-body strength and abdominal stabilization. Findings of this study suggest that sex dimorphism exists in functional movement patterns in the period of mid-adolescence. The results of this research need to be considered while using FMS™ as a screening tool as well as the reference standard for exercise intervention among the secondary school-aged population.
Study of Transmission Dynamics of Novel COVID-19 by Using Mathematical Model
Thabet Abdeljawad
Subject: Keywords: Mathematical model; Novel coronavirus-19; Nonstandard finite difference scheme; Emigration rate
In this research work, we present a mathematical model for novel coronavirus-19 (NCOVID-19) which consists of three compartments, the susceptible, infected and recovered classes, under a convex incidence rate and an emigration rate. We first derive the formulation of the model. Also, we give some qualitative aspects of the model, including the existence of equilibria and their stability, by using various tools of nonlinear analysis. Then, by means of a nonstandard finite difference scheme (NSFD), we simulate the results against the data of Wuhan city for sixty days. Through simulation, we show how protection, exposure, emigration, death and cure rates affect the susceptible, infected and recovered populations over time. On the basis of the simulation, we observe the dynamical behavior due to emigration of the susceptible and infected classes, or one of the two.
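The appeal of an NSFD scheme is that, unlike a naive explicit Euler step, it preserves positivity and the total population for any step size. The sketch below is a standard Mickens-type NSFD discretization of a basic SIR model with bilinear incidence; the paper's model additionally has a convex incidence rate and emigration, which are not reproduced here:

```python
def nsfd_sir(beta, gamma, S0, I0, R0, h=0.1, n_steps=600):
    """Mickens-type NSFD update for SIR: loss terms are placed
    implicitly, so S, I, R stay nonnegative and S+I+R is conserved
    exactly, for any step size h."""
    S, I, R = S0, I0, R0
    out = [(S, I, R)]
    for _ in range(n_steps):
        S = S/(1.0 + h*beta*I)               # S_{k+1}(1 + h*beta*I_k) = S_k
        I = (I + h*beta*S*I)/(1.0 + h*gamma) # uses updated S_{k+1}
        R = R + h*gamma*I                    # uses updated I_{k+1}
        out.append((S, I, R))
    return out
```

Summing the three update equations shows S+I+R is invariant step by step, which is one of the qualitative properties an NSFD scheme is designed to inherit from the continuous model.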
Asymptotic Dynamics of a Class of Third Order Rational Difference Equations
Sk Sarif Hassan, Soma Mondal, Swagata Mandal, Chumki Sau
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: rational difference equations; local asymptotic stability; periodic; Quasi-Periodic and Fractal-like trajectory
The asymptotic dynamics of the classes of rational difference equations (RDEs) of third order defined over the positive real-line as $$\displaystyle{x_{n+1}=\frac{x_{n}}{ax_n+bx_{n-1}+cx_{n-2}}}, \displaystyle{x_{n+1}=\frac{x_{n-1}}{ax_n+bx_{n-1}+cx_{n-2}}}, \displaystyle{x_{n+1}=\frac{x_{n-2}}{ax_n+bx_{n-1}+cx_{n-2}}}$$ and $$\displaystyle{x_{n+1}=\frac{ax_n+bx_{n-1}+cx_{n-2}}{x_{n}}}, \displaystyle{x_{n+1}=\frac{ax_n+bx_{n-1}+cx_{n-2}}{x_{n-1}}}, \displaystyle{x_{n+1}=\frac{ax_n+bx_{n-1}+cx_{n-2}}{x_{n-2}}}$$ is investigated computationally with theoretical discussions and examples. It is noted that all the parameters $a, b, c$ and the initial values $x_{-2}, x_{-1}$ and $x_0$ are all positive real numbers such that the denominator is always positive. Several periodic solutions with high periods of the RDEs as well as their inter-intra dynamical behaviours are studied.
Oscillation Criteria for Third Order Neutral Generalized Difference Equations with Distributed Delay
P. Venkata Mohan Reddy, M. Maria Susai Manuel, Adem Kilicman
Subject: Mathematics & Computer Science, Analysis Keywords: generalized difference operator; oscillation; non-oscillation; converge to zero; distributed delay; riccati transformation
This paper investigates the oscillatory behaviour of a certain type of third order neutral generalized difference equations with distributed delay. Using the generalized Riccati transformation and a Philos-type method, oscillation criteria are obtained which ensure that solutions either oscillate or converge to zero, and a suitable example is given to illustrate the main result.
Zeros and Value Sharing Results for q-Shifts Difference and Differential Polynomials
Subject: Mathematics & Computer Science, Analysis Keywords: Entire and Meromorphic function; q-shift; q-difference polynomial; shared values; Nevanlinna theory
In this paper, we consider the zero distributions of q-shift monomials and difference polynomials of meromorphic functions of zero order, which extends the classical Hayman results on the zeros of differential polynomials to q-shift difference polynomials. We also investigate the problem of q-shift difference polynomials that share a common value.
Numerical Analysis of Flow Characteristics of Jeffery Nanofluid Past a Moving Plate in Conducting Field
Pentyala Srinivasa Rao, Baddela Hari Babu, S V K Varma
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Jeffery nanofluid; radiation; thermal diffusion; finite difference method; moving plate and porous medium
This paper reveals the physical properties of Jeffery nanofluid flow past a moving plate embedded in a porous medium in the presence of radiation and thermal diffusion. The analysis is carried out for three cases of the moving plate, namely a stationary plate (λ = 0), a forward-moving plate (λ = 1) and a backward-moving plate (λ = −1). The finite difference method is applied to solve the governing equations of the flow, and the variations in velocity, temperature and concentration are presented graphically. The impact of several parameters on the local skin friction, Nusselt number and Sherwood number is also examined and discussed. Enhancement of velocity is observed under the impact of the Jeffery parameter for the stationary and backward-moving plates, whereas the reverse behaviour is found for the forward-moving plate. The velocity increases with the porosity parameter for the stationary and forward-moving plates, but the reverse behaviour is noticed for the backward-moving plate.
Fault Diagnosis Method for Aircraft EHA based on FCNN and MSPSO Hyperparameter Optimization
Xudong Li, Yanjun Li, Yuyuan Cao, Shixuan Duan, Xingye Wang, Zejian Zhao
Subject: Engineering, Other Keywords: Electro Hydrostatic Actuator; Fusion Convolutional Neural Networks; Particle Swarm Optimization; Gram Angle Difference Field
To address the high integration, multiple fault types and complex working conditions of the aircraft Electro Hydrostatic Actuator (EHA), and to effectively identify its typical faults, we propose a fault diagnosis method based on fusion convolutional neural networks (FCNN). First, the aircraft EHA fault data is encoded by the Gramian Angular Difference Field (GADF) to obtain fault feature images. Then we build an FCNN model that integrates a 1DCNN and a 2DCNN, where the original 1D fault data is the input of the 1DCNN branch and the feature images obtained by the GADF transformation are the input of the 2DCNN branch. Multiple convolution and pooling operations are performed on each of these inputs to extract features; the resulting feature vectors are spliced in a convergence layer, and fully connected layers and a Softmax layer are finally used to classify the aircraft EHA faults. Furthermore, the multi-strategy hybrid particle swarm optimization (MSPSO) algorithm is applied to optimize the FCNN and obtain a better combination of FCNN hyperparameters; MSPSO incorporates various strategies, including an initialization strategy based on homogenization and randomization and an adaptive inertia weighting strategy. The experimental results indicate that the FCNN model optimized by MSPSO achieves an accuracy of 96.86% in identifying typical faults of the aircraft EHA, about 16.5% and 5.7% higher than the 1DCNN and the 2DCNN, respectively. Additionally, the FCNN model improved by MSPSO has a higher accuracy rate than one tuned with standard PSO.
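The GADF encoding used above has a compact closed form: rescale the series to [-1, 1], map each value to a polar angle phi = arccos(x), and form the matrix sin(phi_i - phi_j). A minimal sketch of that standard transform (min-max rescaling assumed, as is conventional):

```python
import numpy as np

def gadf(series):
    """Gramian Angular Difference Field of a 1D series:
    min-max rescale to [-1, 1], map to angles, take sin(phi_i - phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2.0*(x - x.min())/(x.max() - x.min()) - 1.0   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar encoding
    return np.sin(phi[:, None] - phi[None, :])        # n x n image
```

The result is an antisymmetric n-by-n image with zero diagonal, which is what the 2DCNN branch of the FCNN consumes.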
A Nonlinear Radio-photon Conversion Device
Irina L. Vinogradova, Azat R. Gizatulin, Ivan K. Meshkov, Anton V. Bourdine, Manish Tiwari
Subject: Engineering, Electrical & Electronic Engineering Keywords: radio photonics; radio-over-fiber; orbital angular momentum; quadratic-nonlinear structure; difference frequency generation
The article analyzes existing materials and structures with quadratic-nonlinear optical properties that can be used to generate a difference frequency in the terahertz and sub-terahertz ranges. A principle for constructing a nonlinear optical-to-radio converter based on an optical focon (a focusing cone) is proposed. Assuming that this focon can be implemented with a metal-organic framework (MOF), we propose a technique for modeling its parameters. The mathematical model of wave propagation and nonlinear interaction inside the focon is based on a simplification of the nonlinear wave equation. Within the framework of the developed model, the following parameters are approximately determined: the 3D gradient of the linear refractive index and the function defining the geometric profile of the focon, which together provide few-mode generation of the difference frequency.
Effect of Thermal Radiation on the Conjugate Heat Transfer from a Circular Cylinder with an Internal Heat Source in Laminar Flow
Subject: Engineering, Automotive Engineering Keywords: conjugate heat transfer; convection-radiation; Rosseland approximation; P1 approximation; finite difference; defect correction - multigrid.
The effect of thermal radiation on the two-dimensional, steady-state conjugate heat transfer from a circular cylinder with an internal heat source in steady laminar crossflow is investigated in this work. P0 (Rosseland) and P1 approximations were used to model the radiative transfer, and the mathematical model equations were solved numerically. Qualitatively, the P0 and P1 approximations show the same effect of thermal radiation on conjugate heat transfer: increasing the radiation-conduction parameter decreases the cylinder surface temperature and increases the heat transfer rate. Quantitatively, there are significant differences between the results provided by the two approximations.
Computational Analysis of Generalized Zeta Functions by Using Difference Equations
Asifa Tassaddiq
Subject: Keywords: computational analysis; difference equations; analytic number theory; generalized zeta function; plots; zeros; Taylor Series
In this article, the author performs a computational analysis of the generalized zeta functions using the software Mathematica. Recently obtained difference equations are used for this purpose; they make it possible to compute these functions accurately where their known integral representations cannot be evaluated. Several authors have investigated such functions and their analytic properties, but no work has been reported on the graphical representations and zeros of these functions. The author performs numerical computations to evaluate these functions for different values of the involved parameters. Taylor series expansions are also presented.
Blood Flow Analysis in Tapered Stenosed Arteries with Influence of Heat and Mass Transfer
Yadong Liu, Wenjun Liu
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: blood flow, stenosed artery, K-L model, heat and mass transfer, finite difference scheme
A non-Newtonian fluid model is used to investigate two-dimensional pulsatile blood flow through a tapered artery with a stenosis. The mixed convection effects of heat and mass transfer are also taken into account. By applying non-dimensionalization and a radial coordinate transformation, we simplify the system to flow in a tube. Using a finite difference scheme, numerical solutions are calculated for the velocity, temperature, concentration, resistance, impedance, wall shear stress and shearing stress at the stenosis throat. Finally, a quantitative analysis is carried out.
Modeling Membrane-Protein Interactions
Haleh Alimohamadi, Padmini Rangamani
Subject: Physical Sciences, Applied Physics Keywords: plasma membrane; spontaneous curvature; Helfrich energy; area difference elastic model; protein crowding; Deviatoric curvature
In order to alter and adjust the shape of the membrane, cells harness various mechanisms of curvature generation. Many of these curvature generation mechanisms rely on the interactions between peripheral membrane proteins, integral membrane proteins, and lipids in the bilayer membrane. One of the challenges in modeling these processes is identifying the suitable constitutive relationships that describe the membrane free energy that includes protein distribution and curvature generation capability. Here, we review some of the commonly used continuum elastic membrane models that have been developed for this purpose and discuss their applications. Finally, we address some fundamental challenges that future theoretical methods need to overcome in order to push the boundaries of current model applications.
Failure Characteristics Induced by Unloading Disturbance and Corresponding Mechanical Mechanism of the Coal Seam Floor in Deep Mining
Li Jiazhuo, Xie Guangxiang, Wang Lei
Subject: Engineering, General Engineering Keywords: deep mining; coal seam floor; unloading disturbance; space–time difference; stress shell; mechanical mechanism
Failure characteristics induced by unloading disturbance and the corresponding mechanical mechanism of the coal seam floor are important theoretical bases for preventing water bursting from the coal seam floor and for rock burst alarms in deep mining. However, the existing two-dimensional ground-pressure-control theory, based on shallow mining, cannot sufficiently guide deep-mining practices. In this study, the redistribution of the mining-induced stress field in the rocks surrounding the longwall face and the mechanical behavior of strata in deep mining are investigated through a combination of numerical simulation, physical simulation and field measurement. The results demonstrate that the mining-induced stress fields in the floor of the longwall face differ in space and time. Vertical stress unloading proceeds from the top to the bottom of the floor, horizontal stress unloading is relatively low, and a concentration zone of high horizontal stress exists at the stope boundaries. The critical yield load of a rock stratum in the floor is determined through thin-plate yield theory. Under the combined effect of concentrated high horizontal stress and vertical resilience stress, strata in the floor fracture seam by seam once the load reaches the minimum critical buckling value. Fractured strata slide along the fracture surface, which leads to floor heave, and the stope floor shows evident time-delayed progressive failure characteristics. The stress shell in the stope floor in deep mining is found to be a sensitive mechanical parameter that produces three-dimensional ground-pressure behavior in the floor; this behavior is controlled by the existence of the corresponding stress shell and by the effects induced by its space-time evolution. This study provides a theoretical basis for the dynamic control of hazard-inducing environments and for minimizing or altering disaster-occurrence conditions in coal seam floor engineering.
A CNN-Based Fusion Method for Feature Extraction from Sentinel Data
Giuseppe Scarpa, Massimiliano Gargiulo, Antonio Mazza, Raffaele Gaetano
Subject: Engineering, Electrical & Electronic Engineering Keywords: Coregistration; pansharpening; multi-sensor fusion; multitemporal images; deep learning; normalized difference vegetation index (NDVI)
Sensitivity to weather conditions, and especially to clouds, is a severe limiting factor in the use of optical remote sensing for Earth monitoring applications. A possible alternative is to resort to weather-insensitive synthetic aperture radar (SAR) images. However, in many real-world applications, critical decisions are made based on informative spectral features, such as water, vegetation or soil indices, which cannot be extracted from SAR images. In the absence of optical sources, these data must be estimated. The current practice is to perform linear interpolation between data available at temporally close time instants. In this work, we propose to estimate missing spectral features through data fusion and deep learning. Several sources of information are taken into account - optical sequences, SAR sequences and a DEM - so as to exploit both temporal and cross-sensor dependencies. Based on these data, and on a tiny cloud-free fraction of the target image, a compact convolutional neural network (CNN) is trained to perform the desired estimation. To validate the proposed approach, we focus on the estimation of the normalized difference vegetation index (NDVI), using coupled Sentinel-1 and Sentinel-2 time series acquired over an agricultural region of Burkina Faso from May to November 2016. Several fusion schemes are considered, causal and non-causal, single-sensor and joint-sensor, corresponding to different operating conditions. Experimental results are very promising, showing a significant gain over baseline methods according to all performance indicators.
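For reference, the NDVI that the network is trained to estimate is a one-line band ratio; a minimal sketch (the reflectance values below are made up for illustration):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    A small eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red light:
print(round(float(ndvi(0.45, 0.05)), 3))  # 0.8   -> vegetated pixel
print(round(float(ndvi(0.10, 0.09)), 3))  # 0.053 -> bare soil / sparse cover
```

The same expression applies pixel-wise to whole band arrays thanks to NumPy broadcasting.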
A Generalized Measure of Cumulative Residual Entropy
Sudheesh Kumar Kattumannil, E. P. Sreedevi, N. Balakrishnan
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: cumulative entropy; cumulative residual entropy; extropy; gini mean difference; tsallis entropy; weighted cumulative residual entropy
In this work, we introduce a generalized measure of cumulative residual entropy and study its properties. We show that several existing measures of entropy, such as cumulative residual entropy, weighted cumulative residual entropy and cumulative residual Tsallis entropy, are all special cases of the generalized cumulative residual entropy. We also propose a measure of generalized cumulative entropy, which includes cumulative entropy, weighted cumulative entropy and cumulative Tsallis entropy as special cases. We discuss a generating function approach through which the different entropy measures can be derived. Finally, using the newly introduced entropy measures, we establish some relationships between entropy and extropy measures.
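As a concrete instance of one special case mentioned above, the (non-generalized) cumulative residual entropy, defined as the integral of −S(x) log S(x) over x, can be estimated from a sample via the empirical survival function. A minimal sketch; the plug-in estimator form is a common choice, not taken from the paper:

```python
import numpy as np

def cumulative_residual_entropy(sample):
    """Empirical cumulative residual entropy: -integral of S(x) log S(x) dx,
    with S the empirical survival function (piecewise constant between
    order statistics)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    s = (n - np.arange(1, n)) / n   # survival level on (x_i, x_{i+1}]
    gaps = np.diff(x)
    return -np.sum(gaps * s * np.log(s))

rng = np.random.default_rng(0)
est = cumulative_residual_entropy(rng.exponential(scale=1.0, size=200_000))
print(est)  # close to 1.0, the exact CRE of a unit-rate exponential
```

For a unit-rate exponential the exact value is 1, which makes a convenient sanity check.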
Color Glass by Layered Nitride Films for Building Integrated Photovoltaic (BIPV) System
Akpeko Gasonoo, Hyeon-Sik Ahn, Seongmin Lim, Jae-Hyun Lee, Yoonseuk Choi
Subject: Engineering, Automotive Engineering Keywords: BIPV; color glass; thin film interference; optical path-length difference; RF Sputtering; nitride; multilayer film
We investigated layered titanium nitride (TiN) and aluminum nitride (AlN) films for color glasses in Building Integrated Photovoltaic (BIPV) systems. AlN and TiN are suitable, cost-effective optical materials for thin multilayer films owing to the significant difference in their refractive indices. To fabricate the structure, we used radio frequency magnetron sputtering to achieve the target thicknesses uniformly; a simple, fast and cheap fabrication process is obtained by depositing the multilayer films in a single sputtering chamber. It is demonstrated that a multilayer stack in which light passes from a low refractive index layer to a high refractive index layer, or vice versa, can effectively create various distinct color reflections for different film thicknesses and multilayer structures. Simulations based on wave optics show that the TiN/AlN multilayer offers greater color design freedom and a cheaper fabrication process than the AlN/TiN multilayer. Blue, green and yellow color glasses with an optical transmittance of more than 80% were achieved with ITO-coated glass/TiN/AlN multilayer films. This technology exhibits good potential for commercial BIPV applications.
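As a rough sketch of the interference rule behind such color stacks: a single dielectric layer reflects most strongly near the quarter-wave thickness t = λ/(4n), where the round-trip optical path-length difference is half a wavelength. The index value below is an illustrative assumption, not a value from the paper:

```python
def quarter_wave_thickness(wavelength_nm, n):
    """Thickness at which a film of refractive index n produces an
    optical path-length difference of lambda/2 on reflection."""
    return wavelength_nm / (4 * n)

n_aln = 2.1  # illustrative refractive index for AlN (assumption)
for color, lam in [("blue", 460), ("green", 530), ("yellow", 580)]:
    print(color, round(quarter_wave_thickness(lam, n_aln), 1), "nm")
```

Real designs also depend on the phase shifts at each interface and on absorption in TiN, which is why the paper relies on full wave-optics simulation.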
Anthropogenic Factors Affecting the Vegetation Dynamics in the Arid Middle East
Iman Rousta, Haraldur Olafsson, Hao Zhang, Md Moniruzzaman, Jaromir Krzyszczak, Piotr Baranowski
Subject: Earth Sciences, Atmospheric Science Keywords: Middle East; Moderate Resolution Imaging Spectroradiometer; Normalized Difference Vegetation Index; time series analysis; governmental policy
The spatiotemporal variability of vegetation in the Middle East was investigated for the period 2001–2019 using the Moderate Resolution Imaging Spectroradiometer (MODIS) 16-day/500 m composites of the Normalized Difference Vegetation Index (NDVI; MOD13A1). The results reveal a strong increase in the NDVI coverage in the Middle East during the study period (R = 0.75, p-value = 0.05). In Egypt, the annual coverage exhibits the strongest positive trend (R = 0.99, p-value = 0.05). In Turkey, both the vegetation coverage and density increased from 2001 to 2019, which can be attributed to the construction of some of the biggest dams in the Middle East, such as the Atatürk and Ilisu dams. Significant increases in the annual coverage and maximum and average NDVI in Saudi Arabia are due to farming in the northern part of the country for which groundwater and desalinated seawater are used. The results of this study suggest that the main factors affecting the vegetation coverage in the Middle East are governmental policies. These policies can have a positive effect on the vegetation coverage in some countries such as Egypt, Saudi Arabia, Qatar, Kuwait, Iran, and Turkey.
Three-Dimensional Numerical Visualization And Simulation of Multiphysics in Point Source Dc Magnetometric Resistivity Method
Wenlong Gao, Lei Zhou, Liangjun Yan
Subject: Earth Sciences, Atmospheric Science Keywords: MMR; abnormal potential method; modified Biot-Savart law; finite difference method; three-dimensional visualization simulation
The Magnetometric Resistivity (MMR) method measures the magnetic field using the same current injection setup as the traditional apparent resistivity method. While it has been applied fairly widely abroad, research on the MMR method in China is scarce, and the method is not even well known there. Based on MMR theory under point-source DC conditions, combined with the abnormal potential method and the modified Biot-Savart law, this paper computes the abnormal potential field, the electric field and the magnetic field with a three-dimensional finite difference method implemented on the MATLAB 2018a platform. The multi-physics simulation of the electromagnetic fields is realized through MATLAB programming, and the correctness of the algorithm is verified against a spherical anomaly model with an analytical solution. Through the three-dimensional visualization of the multi-physics fields, the response mechanism of the electromagnetic field under DC conditions and its three-dimensional spatial distribution can be better understood. It is hoped that this research helps MMR practitioners make better use of the method for exploration.
Application of Geo-informatics Technology to Access the Surface Temperature Using LANDSAT 8 OLI/TIRS Satellite Data: A Case Study in Ampara District in Sri Lanka
Ibra Lebbe Mohamed Zahir
Subject: Social Sciences, Geography Keywords: land surface temperature; operational land imager; thermal infrared sensor; normalized difference vegetation Index; geospatial technology
Land surface temperature is one of the key variables of global climate change and of models that estimate the radiative budget in heat balance calculations, and it is strongly influenced by the surface emissivity. In this study, Landsat 8 satellite images from the Operational Land Imager and Thermal Infrared Sensor were used to calculate land surface temperature through geospatial technology over Ampara district, Sri Lanka. The land surface temperature was estimated from the land surface emissivity and from Normalized Difference Vegetation Index values determined from the red and near-infrared channels, with the emissivity derived from the thermal infrared bands. Pixel-based calculations were applied to Landsat 8 thermal Band 10 images from various dates. The results make it possible to compute the Normalized Difference Vegetation Index, land surface emissivity and land surface temperature in a manner that can be compared with land use/land cover data, and to determine and predict surface temperature changes in support of decision-making for society. Since the study area faces seasonal drought, the predictions indicate how the land can be used efficiently under present conditions. The land surface temperature estimates can therefore inform whether new irrigation systems for agricultural activities are viable, or whether solar hubs for energy production should be introduced in the future.
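The NDVI-to-emissivity-to-temperature chain described above is the common single-channel Landsat 8 workflow; a hedged single-pixel sketch using typical literature coefficients (the threshold and emissivity values are widespread defaults, not values from the paper):

```python
import math

def lst_from_band10(bt_kelvin, ndvi, ndvi_min=0.2, ndvi_max=0.5):
    """Single-channel LST: brightness temperature -> vegetation fraction
    -> emissivity -> emissivity-corrected surface temperature."""
    pv = ((ndvi - ndvi_min) / (ndvi_max - ndvi_min)) ** 2  # vegetation fraction
    emissivity = 0.004 * pv + 0.986                        # common approximation
    wavelength = 10.895e-6   # Band 10 effective wavelength (m)
    rho = 1.438e-2           # h*c/k_B (m K)
    return bt_kelvin / (1 + (wavelength * bt_kelvin / rho) * math.log(emissivity))

print(round(lst_from_band10(300.0, 0.35), 2))  # ~300.9 K, slightly above BT
```

Because emissivity is below 1, the corrected temperature always sits slightly above the raw brightness temperature.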
Axial Diffusion of the Higher Order Scheme on the Numerical Simulation of Non-Steady Partial Differential Equation in the Human Pulmonary Capillaries
Azim Aminataei, Mohammadhossein Derakhshan
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: non-steady partial differential equation; higher order finite difference scheme; axial diffusion; convergence; consistency; stability
In the present study, a mathematical model based on a non-steady partial differential equation for oxygen mass transport in the human pulmonary circulation is proposed. Mathematical modelling of this kind of problem leads to a non-steady partial differential equation, and for its numerical simulation we have used finite differences. The aim is an exact numerical analysis of the scheme, in which consistency, stability and convergence are established. The motivation is to raise the order of the numerical solution to a higher-order scheme: an increase in order makes the numerical simulation more accurate, but also more complicated, and the numerical analysis at this order of accuracy requires further research work.
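The accuracy gain from raising the order of a finite-difference scheme can be seen already in a toy setting; a sketch comparing second- and fourth-order central stencils for a second derivative (unrelated to the paper's specific scheme):

```python
import math

def d2_central(f, x, h):
    """Second-order central difference for f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d2_fourth(f, x, h):
    """Fourth-order central difference for f''(x)."""
    return (-f(x + 2*h) + 16*f(x + h) - 30*f(x)
            + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

# f'' of sin at x = 1 is -sin(1); the higher-order stencil is far more accurate.
x, h = 1.0, 0.01
exact = -math.sin(1.0)
print(abs(d2_central(math.sin, x, h) - exact))  # O(h^2) error, ~7e-6
print(abs(d2_fourth(math.sin, x, h) - exact))   # O(h^4) error, ~1e-10
```

Halving h cuts the first error by about 4 and the second by about 16, which is the practical payoff of the higher order.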
Evaluation of Climate Change Impacts on Wetland Vegetation in Dunhuang Yangguan National Nature Reserve in Northwest China Using Landsat Derived NDVI
Feifei Pan, Jianping Xie, Juming Lin, Tingwei Zhao, Yongyuan Ji, Qi Hu, Xuebiao Pan, Cheng Wang, Xiaohuan Xi
Subject: Earth Sciences, Environmental Sciences Keywords: wetland vegetation; normalized difference vegetation index (NDVI); Landsat; precipitation; air temperature; snowmelt; extremely arid regions
Based on 541 Landsat images acquired between 1988 and 2016, the normalized difference vegetation indices (NDVIs) of the wetland vegetation at Xitugou (XTG) and Wowachi (WWC) inside the Dunhuang Yangguan National Nature Reserve (YNNR) in northwest China were calculated to assess the impacts of climate change on wetland vegetation in the YNNR. The wetland vegetation at both XTG and WWC has shown a significant increasing trend over the past 30 years, driven by the increases in annual mean temperature and peak snow depth over the Altun Mountains. The influence of local precipitation on the wetland vegetation was greater at the XTG than at the WWC, which demonstrates that in extremely arid regions the major constraint on wetland vegetation is soil water availability, which is closely related to surface water detention and groundwater discharge. At both XTG and WWC, snowmelt from the Altun Mountains is the main contributor to groundwater discharge. Local precipitation plays a lesser role at the WWC than at the XTG because the wetland vegetation grows on relatively flat terrain at the WWC, but in a stream channel at the XTG.
Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays
Cheng Hu, Jingyang Wang, Weiming Tian, Tao Zeng, Rui Wang
Subject: Engineering, Electrical & Electronic Engineering Keywords: MIMO radar; MIMO imaging; Near-field imaging; Height difference between T/R arrays; Grating lobes
MIMO (multiple-input multiple-output) radar provides much more flexibility than traditional radar because it can realize far more observation channels than the actual number of T/R (transmit and receive) elements. In designing the array of a MIMO imaging radar, the commonly used virtual array theory generally assumes that all elements are placed on the same line. However, due to the physical size of the antennas and the coupling effect between T/R elements, a certain height difference between the T/R arrays is essential, resulting in defocusing of the edge points of the scene. Moreover, the virtual array theory implies a far-field approximation, leading to inevitably high grating lobes in the imaging results for near-field edge points of a scene observed by a common MIMO array. To tackle these problems, this paper derives the relationship between the target's PSF (point spread function) and the pattern of the T/R arrays, from which a design criterion for near-field imaging MIMO arrays is presented. First, the proper height between the T/R arrays is designed so that near-field edge points are well focused. Second, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by simulations and an experiment.
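The far-field virtual array theory mentioned above maps each transmit-receive pair to one equivalent phase center at the sum of the element positions; a minimal 1-D sketch with made-up element positions showing how 2 + 4 physical elements yield 8 uniform channels:

```python
import numpy as np

tx = np.array([0.0, 4.0])            # 2 transmit element positions (illustrative)
rx = np.array([0.0, 1.0, 2.0, 3.0])  # 4 receive element positions (illustrative)

# Each Tx/Rx pair contributes one virtual phase center at tx + rx:
virtual = (tx[:, None] + rx[None, :]).ravel()
print(np.sort(virtual))  # [0. 1. 2. 3. 4. 5. 6. 7.] -> 8 uniform channels
```

The paper's point is that this equivalence breaks down in the near field and when the T/R arrays sit at different heights, which is what its design criterion corrects for.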
Migratory Birds Monitoring of India's Largest Shallow Saline Ramsar Site with Big Geospatial Data Using Google Earth Engine for Restoration
Rajashree Naik, L.K. Sharma
Subject: Earth Sciences, Environmental Sciences Keywords: Inland saline wetland; lake; ecosystem; biodiversity; human interventions; Google Earth Engine; Normalized Difference Water Index; Restoration
Globally, saline lakes, which account for 23% of all lakes by area and 44% by volume, might desiccate by 2025 due to agricultural diversion, illegal encroachment, pollution and invasive species. India's largest saline lake, Sambhar, is currently shrinking at a rate of 4.23% due to illegal saltpan encroachment. This article identifies the trend of migratory birds and the monthly status of the wetland. Bird surveys conducted in 2019, 2020 and 2021 were combined with literature data from 1994, 2003 and 2013 to analyse visiting trends, feeding habits, the migratory-to-resident ratio and ecological diversity indices, and the Normalized Difference Water Index (NDWI) was scripted in Google Earth Engine. The results show that the lake has been suitable habitat for 97 species. The NDWI fluctuated strongly over the study period, from a high of 0.71 in 2021 to a low of 0.008 in 2019. The decreasing trend of migratory birds, coupled with the decreasing water level, casts doubt on the lake's continued existence: if the causal factors are not checked, it might completely desiccate by 2059 according to the prediction. Steps that might help conservation are suggested; at the least, the cost of restoration might exceed the revenue generated.
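The NDWI used above (McFeeters form, green and NIR bands) is a one-line band ratio; a minimal sketch with illustrative reflectances (in Google Earth Engine the same expression is applied to image bands):

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR); positive over open water."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

print(round(float(ndwi(0.3, 0.1)), 2))  # 0.5  -> open water
print(round(float(ndwi(0.1, 0.4)), 2))  # -0.6 -> dry land / vegetation
```

Thresholding this index per month is what allows the wetland extent to be tracked over the whole archive.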
Relativistic Effects Appearing at Non-Relativistic Speeds
Kenji Kawashima
Subject: Physical Sciences, General & Theoretical Physics Keywords: hidden difference in relativistic energy; center of energy; time dilation; mechanical transverse wave; superposition of waves
We study the center of energy (CE) before and after the separation of superposed wave from a moving medium (MM). It is assumed that two out-of-phase mechanical transverse waves propagating from the opposite directions on a medium moving at non-relativistic speeds are superposed and the superposed portion (SP) is separated from the MM at that moment. We consider the CE of the SP before and after the separation from the MM. The location of CE (LCE) of the SP seems to be at the center of it at the moment of superposition. The SP rotates due to the separation from the MM since the velocity of each portion symmetric with respect to the center of the SP is equal in magnitude and opposite in direction. The magnitudes of velocities of the symmetric portions become different as soon as the SP begins to rotate with the separation from the MM. Then the energies of their symmetric portions are not the same, so the LCE of the SP is not at the center of it. As a result, the LCE of it looks different before and after the separation from the MM. We must find a solution to keep the LCE of the SP constant. We propose that two out-of-phase mechanical transverse waves (MTWs) propagating from the opposite directions on a MM originally have hidden difference in relativistic energy and it suddenly appears within the range observable in Newtonian mechanics when the SP starts rotating. This means that the point of view of hidden difference in relativistic energy is necessary to keep the LCE constant.
Property Analysis of Riccati Difference Equation for Load Frequency Controller of Time Delayed Power System Using IMMKF
M. Sumathy, M. Maria Susai Manuel, Adem Kılıçman, Jesintha Mary
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Riccati Difference equations; Power System Stability; Interacting Multiple Model Kalman Filter; Load frequency controller; Time Delays
In this paper, a mathematical model is first formulated for the transient frequency of a power system considering the time delays that occur while transmitting control signals over open communication infrastructure. Neglecting time delay leads to improper measurement of frequency variation in the power system. The impact of time delays on power system stability is studied by estimating the decay rate of the frequency waveform using a Kalman Filter (KF). Since multiple time delays are possible in a power system, this paper also develops an Interacting Multiple Model (IMM) algorithm over a multiple-model space, with the KF as the state estimation tool; the multiple time delays are treated as the multiple model space. The results show that the KF provides a good estimate of the correct model for a particular input set. The qualitative properties of the Riccati difference equation (RDE), in terms of the state error covariance of the IMMKF, are also analyzed and presented.
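The Riccati difference equation analyzed above is the covariance recursion of the Kalman filter; a scalar sketch with made-up system constants (x_{k+1} = a x_k + w, y_k = c x_k + v), iterated to its steady state:

```python
a, c, q, r = 0.9, 1.0, 0.1, 0.5  # illustrative system and noise parameters

def riccati_step(p):
    """One step of the Kalman-filter covariance (Riccati) recursion."""
    p_pred = a * p * a + q                    # time update
    k = p_pred * c / (c * p_pred * c + r)     # Kalman gain
    return (1 - k * c) * p_pred               # measurement update

p = 1.0
for _ in range(100):
    p = riccati_step(p)
print(round(p, 4))                       # ~0.1557: steady-state covariance
print(abs(riccati_step(p) - p) < 1e-12)  # True: a fixed point of the RDE
```

Convergence of this recursion to a unique fixed point is exactly the kind of qualitative property the paper studies for the IMMKF error covariance.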
Synthesis of SiO2 Coated Ag-Cicada Wing as Large Scale Surface Enhanced Fluorescence Substrate
Siye Pan, Yanying Zhu, Guochao Shi, Zubin Shang, Mingli Wang
Subject: Physical Sciences, Optics Keywords: surface-enhanced fluorescence; quenching; Rhodamine 6G; hot spot; separation layer; high reproducibility; finite difference time domain
Surface-enhanced fluorescence (SEF) detection based on plasmonic nanopillar arrays with nanoparticles has opened a new gate in biological imaging and sensing applications. The fluorescence enhancement of a probe molecule depends on its equilibrium position: it must be close enough to the hot spot to benefit from the electromagnetic field enhancement, but not so close to the metal surface that quenching occurs. Here, a large-scale SiO2-Ag-cicada wing SEF substrate was fabricated by magnetron sputtering, with a corrected enhancement factor of 797.6. The cicada wing provides the skeleton of the nanopillar array structure, the deposited Ag constructs two kinds of hot spots, and the SiO2 forms a separation layer that prevents quenching. Moreover, the substrate exhibited good reproducibility, high sensitivity with a low limit of detection (LOD), and high stability against oxidation. We propose that modifying an SEF substrate with SiO2 can not only improve the enhancement performance but also expand its application in biological investigations.
Enhancement in Inverse Pyramid SERS Substrates with Entrapped Gold Nanoparticles
István Rigó, Miklos Veres, László Himics, Zsuzsanna Pápa, Orsolya Hakkel, P. Fürjes
Subject: Keywords: Surface-enhanced Raman spectroscopy (SERS); surface plasmons; Finite-Difference Time-Domain (FDTD) method; electromagnetic (EM) enhancement
Giant plasmonic surface enhancement has been observed in gold-coated, micron-sized inverse pyramids entrapping a gold nanoparticle. The amplification of both the surface-enhanced Raman and photoluminescence signals was found to depend on the diameter of the trapped gold nanoparticle, and an approximately 50-fold enhancement was detected for the 250 nm diameter sample relative to the 50 nm one. Finite-difference time-domain simulations, performed to determine the near-field distribution in the structure, showed that when the nanoparticle protrudes into the hotspot zone of the void, coupling of the electromagnetic field occurs and the plasmon-related near-field enhancement is concentrated in the close vicinity of the nanoparticle, mainly in the narrow gaps around the tangential points between the curved sphere and the flat pyramid surface. This results in a more than 15-fold increase in the near-field intensity compared to the empty void.
Using Difference Scheme Method to the Fractional Telegraph Model with Atangana-Baleanu-Caputo Derivative
Mahmut Modanli
Subject: Keywords: fractional telegraph model with Atangana-Baleanu derivative; Laplace method; stability inequalities; difference schemes; implicit finite method
The fractional telegraph partial differential equation with the fractional Atangana-Baleanu-Caputo (ABC) derivative is studied. The Laplace method is used to find the exact solution of this equation, and stability inequalities are proved for the exact solution. Difference schemes for the implicit finite-difference method are constructed, and the implicit method is used to model the fractional telegraph differential equation defined by the Atangana-Baleanu (AB) Caputo fractional derivative on different intervals. The stability of the difference schemes for this problem is proved by the matrix method. Numerical results, compared with the exact solution, confirm the accuracy and effectiveness of the proposed method.
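As a simplified illustration of the implicit finite-difference construction, here applied to the classical 1-D heat equation u_t = u_xx with the fractional memory terms of the paper's model omitted; the unconditional stability is the property that motivates implicit schemes in this setting:

```python
import numpy as np

nx, nt, dt = 50, 10, 0.01          # grid points, time steps, step size
dx = 1.0 / nx
x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)              # initial condition; u = 0 at both ends

r = dt / dx**2                     # r = 25: an explicit scheme would blow up
# Backward Euler: (I - r*D2) u^{n+1} = u^n on the interior nodes
A = ((1 + 2 * r) * np.eye(nx - 1)
     - r * np.eye(nx - 1, k=1)
     - r * np.eye(nx - 1, k=-1))
for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, u[1:-1])

exact = np.exp(-np.pi**2 * nt * dt) * np.sin(np.pi * x)
err = float(np.max(np.abs(u - exact)))
print(err)  # ~0.018: first-order accurate in time, but stable for any dt
```

The fractional ABC version replaces the single previous time level on the right-hand side with a weighted sum over the full history, but the implicit tridiagonal solve per step is the same.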
Performance Assessment of Newly Developed Seaweed Enhancing Index
Muhammad Danish Siddiqui, Arjumand Z. Zaidi, Muhammad Abdullah
Subject: Earth Sciences, Geoinformatics Keywords: floating algae index (FAI); normalized difference vegetation index (NDVI); remote sensing; seaweed enhancing index (SEI); seaweed
Seaweeds are regarded as one of the most valuable coastal resources because of their use in human food, cosmetics and other industrial items. They also play a significant role in providing nourishment, shelter and breeding grounds for fish and many other marine species. This study introduces a newly developed seaweed enhancing index (SEI) based on the near-infrared (NIR) and shortwave infrared (SWIR) spectral bands of Landsat 8 satellite data. The seaweed patches in the coastal waters of Karachi, Pakistan were mapped using the SEI, and its performance was compared with other commonly used indices, the Normalized Difference Vegetation Index (NDVI) and the Floating Algae Index (FAI). The accuracy of the maps obtained from the SEI, NDVI and FAI was checked against field-verified seaweed locations; the purpose of the field surveys was to validate the results and to evaluate the performance of the SEI against the NDVI and FAI. The SEI performed better than the NDVI and FAI, enhancing submerged seaweed pixels that the other indices failed to detect.
Hua-Yu Li, Hong-Rui Li
Subject: Engineering, Energy & Fuel Technology Keywords: heating cycles; thermodynamic cycles; thermodynamics; temperature difference utilization; heating; cold energy utilization; sustainable energy; cogeneration; thermal science
Considering the significance of thermodynamic cycles in the global energy system, it is necessary to develop new general classes of thermodynamic cycles to relieve current energy and environmental problems. Inspired by the relationship between power cycles and refrigeration cycles, we realize that general classes of thermodynamic cycles should occur in pairs with opposite functions. Here we reverse class 1 heating cycles to obtain another new general class of thermodynamic cycles named class 2 heating cycles (HC-2s). HC-2s have two basic forms, and each contains six thermodynamic processes. HC-2s present the simplest and most general approach to utilizing the temperature difference between a medium-temperature heat source and a low-temperature heat sink to achieve efficient high-temperature heating. HC-2s fill the gaps that have existed since the origin of thermal science, and they will play significant roles in the global sustainable energy system.
The VIT Transform Approach to Discrete-Time Signals and Linear Time-Varying Systems
Edward Kamen
Subject: Engineering, Electrical & Electronic Engineering Keywords: z-transform; time-varying systems; time-varying difference equations; skew polynomial rings; extended Euclidean algorithm; fraction decomposition
A transform approach based on a variable initial time (VIT) formulation is developed for discrete-time signals and linear time-varying discrete-time systems or digital filters. The VIT transform is a formal power series in z^(-1) which converts functions given by linear time-varying difference equations into left polynomial fractions with variable coefficients, and with initial conditions incorporated into the framework. It is shown that the transform satisfies a number of properties that are analogous to those of the ordinary z-transform, and that it is possible to do scaling of z^(-i) by time functions, which results in left-fraction forms for the transform of a large class of functions including sinusoids with general time-varying amplitudes and frequencies. Using the extended right Euclidean algorithm in a skew polynomial ring with time-varying coefficients, it is shown that a sum of left polynomial fractions can be written as a single fraction, which results in linear time-varying recursions for the inverse transform of the combined fraction. The extraction of a first-order term from a given polynomial fraction is carried out in terms of the evaluation of z^(i) at time functions. In the application to linear time-varying systems, it is proved that the VIT transform of the system output is equal to the product of the VIT transform of the input and the VIT transform of the unit-pulse response function. For systems given by a time-varying moving average or an autoregressive model, the transform framework is used to determine the steady-state output response resulting from various signal inputs such as the step and cosine functions.
An Exploration of a Balanced Up-downwind Scheme for Solving Heston Volatility Model Equations on Variable Grids
Chong Sun, Qin Sheng
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Heston volatility model; initial-boundary value problems; finite difference approximations; up-downwind scheme; order of convergence; stability
This paper studies an effective finite difference scheme for solving two-dimensional Heston stochastic volatility option pricing model problems. A dynamically balanced up-downwind strategy for approximating the cross-derivative is implemented and analyzed. Semi-discretized and spatially nonuniform platforms are utilized. The resulting numerical method is simple and straightforward, with reliable first-order overall approximations. The spectral norm is used throughout the investigation, and numerical stability is proven. Simulation experiments are given to illustrate our results.
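The abstract does not spell out the stencils, but a common "up-downwind" pairing for the mixed-derivative term, chosen by the sign of the correlation coefficient (an illustrative choice, not necessarily the authors' exact scheme), can be sketched as:

```python
import numpy as np

def cross_derivative(u, i, j, hx, hy, rho_nonneg=True):
    """Seven-point stencils for d2u/(dx dy) on a uniform grid.

    For rho >= 0 the stencil leans on the (i+1,j+1)/(i-1,j-1) corners,
    for rho < 0 on the opposite corners; both are second-order accurate.
    """
    if rho_nonneg:
        num = (2*u[i, j] + u[i+1, j+1] + u[i-1, j-1]
               - u[i+1, j] - u[i-1, j] - u[i, j+1] - u[i, j-1])
    else:
        num = (-2*u[i, j] - u[i+1, j-1] - u[i-1, j+1]
               + u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1])
    return num / (2*hx*hy)

# For u(x, y) = x*y the mixed derivative is exactly 1; both stencils reproduce it.
x = np.linspace(0.0, 1.0, 11)
u = np.outer(x, x)            # u[i, j] = x[i] * x[j]
h = x[1] - x[0]
print(cross_derivative(u, 5, 5, h, h, True))   # both prints are about 1.0
print(cross_derivative(u, 5, 5, h, h, False))
```

Pairing the two one-sided stencils this way keeps the off-diagonal contributions sign-consistent, which is the usual motivation for up-downwind treatment of the Heston correlation term.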
Series Representation of Power Function
Petro Kolosov
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: series representation; power function; monomial; binomial theorem; multinomial theorem; worpitzky identity; stirling numbers of second kind; faulhaber's sum; finite difference; faulhaber's formula; central factorial numbers; binomial coefficients; binomial distribution; binomial transform; bernoulli numbers; oeis; multinomial coefficients
In this paper we discuss a generalization of the binomially distributed triangle, sequence A287326 in the OEIS. The main property of A287326 is that it returns a perfect cube n^3 as the sum of the terms of its n-th row over k, 0 ≤ k ≤ n−1 or 1 ≤ k ≤ n, by means of its symmetry. We derive similar triangles that yield the powers m = 5 and 7 as row sums and generalize these results to obtain every odd-powered monomial n^(2m+1), m ≥ 0, as the sum of the row terms of a corresponding triangle. In other words, this manuscript finds and discusses polynomials D_m(n,k) and U_m(n,k) such that, when summed over k in a suitable range with respect to m and n, they return the monomial n^(2m+1).
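The cube-sum property described here is easy to check numerically; one closed form consistent with it (assumed for illustration, not quoted from the paper) is T(n,k) = 6k(n−k) + 1:

```python
def row(n):
    """Row n of the triangle, using the assumed closed form T(n, k) = 6*k*(n-k) + 1."""
    return [6*k*(n - k) + 1 for k in range(n)]

# Each row is symmetric (after its leading 1) and sums to a perfect cube.
for n in range(1, 4):
    print(n, row(n), sum(row(n)))
# 1 [1] 1
# 2 [1, 7] 8
# 3 [1, 13, 13] 27
```

The identity follows from sum_k 6k(n−k) = n^3 − n, so adding the n ones in each row gives exactly n^3.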
A Lagrangian Approach for Computational Acoustics with Meshfree Method
Yong Ou Zhang, Stefan G. Llewellyn Smith, Tao Zhang, Tian Yun Li
Subject: Physical Sciences, Acoustics Keywords: Lagrangian approach; Lagrangian acoustic perturbation equations; computational acoustics; meshfree method; smoothed particle hydrodynamics; generalized finite difference method
Although Eulerian approaches are standard in computational acoustics, they are less effective for certain classes of problems like bubble acoustics and combustion noise. A different approach for solving acoustic problems is to compute with individual particles following particle motion. In this paper, a Lagrangian approach to model sound propagation in moving fluid is presented and implemented numerically, using three meshfree methods to solve the Lagrangian acoustic perturbation equations (LAPE) in the time domain. The LAPE split the fluid dynamic equations into a set of hydrodynamic equations for the motion of fluid particles and perturbation equations for the acoustic quantities corresponding to each fluid particle. Then, three meshfree methods, the smoothed particle hydrodynamics (SPH) method, the corrective smoothed particle (CSP) method, and the generalized finite difference (GFD) method, are introduced to solve the LAPE and the linearized LAPE (LLAPE). The SPH and CSP methods are widely used meshfree methods, while the GFD method, based on the Taylor series expansion, can be easily extended to higher orders. Applications to modeling sound propagation in steady or unsteady fluids in motion are outlined, treating a number of different cases in one and two space dimensions. A comparison of the LAPE and the LLAPE using the three meshfree methods is also presented. The Lagrangian approach shows good agreement with exact solutions. The comparison indicates that the CSP and GFD methods exhibit convergence in cases with different background flows. The GFD method is more accurate, while the CSP method can handle higher Courant numbers.
Vibration Analysis of Axially Functionally Graded Non-Prismatic Timoshenko Beams Using the Finite Difference Method
Valentin Fogang
Subject: Engineering, Civil Engineering Keywords: Axially functionally graded non-prismatic Timoshenko beam; finite difference method; additional points; vibration analysis; direct time integration method
This paper presents an approach to the vibration analysis of axially functionally graded non-prismatic Timoshenko beams (AFGNPTB) using the finite difference method (FDM). The characteristics (cross-sectional area, moment of inertia, elastic moduli, shear moduli, and mass density) of axially functionally graded beams vary along the longitudinal axis. The Timoshenko beam theory covers cases associated with small deflections based on shear deformation and rotary inertia considerations. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. In this paper, differential equations were formulated with finite differences, and additional points were introduced at the beam's ends and at positions of discontinuity (supports, hinges, springs, concentrated mass, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. Vibration analysis of AFGNPTB was conducted with this model, and natural frequencies were determined. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of AFGNPTB, considering the damping. The results obtained in this study showed good agreement with those of other studies, and the accuracy was always increased through a grid refinement.
Does Green Innovation Promote New Urbanization development? From the Perspective of Coupling Coordination between Green Innovation and New Urbanization
Weixiang Xu, Lindong Ma, Yuanxiao Hong, Xiaoyong Quan
Subject: Social Sciences, Accounting Keywords: green innovation; new urbanization; coupling model; coupling coordination degree; temporal and spatial difference; Yangtze River Delta City Group
Green innovation has become the mainstream of the era, and new urbanization is an inevitable choice in the process of urbanization in China. Focusing on the topics of green innovation and new urbanization, much work has been done to identify their determining factors separately, while the relationship between the two remains to be explored. Hence, in this article, representative indicators of new urbanization and green innovation are selected to study the Yangtze River Delta City Group from the perspective of both the entire urban agglomeration and single cities, in terms of time and space, using the entropy method and the coupling model. The results show that (1) green innovation promotes new urbanization development, and there is a synergistic effect between the two systems; (2) the level of economic development is an important factor affecting the degree of coupling and coordination between the two systems, and its influence is stronger than the spatial effect; and (3) green innovation and new urbanization show positive spatial autocorrelation and regional agglomeration (with High-High, Low-Low, and High-Low clusters).
Vibration Analysis of Axially Functionally Graded Non-Prismatic Euler-Bernoulli Beams Using the Finite Difference Method
Subject: Engineering, Civil Engineering Keywords: Axially functionally graded non-prismatic Euler-Bernoulli beam; finite difference method; additional points; vibration analysis; direct time integration method
This paper presents an approach to the vibration analysis of axially functionally graded (AFG) non-prismatic Euler-Bernoulli beams using the finite difference method (FDM). The characteristics (cross-sectional area, moment of inertia, elastic moduli, and mass density) of AFG beams vary along the longitudinal axis. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. In this paper, differential equations were formulated with finite differences, and additional points were introduced at the beam's ends and at positions of discontinuity (supports, hinges, springs, concentrated mass, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. Vibration analysis of AFG non-prismatic Euler-Bernoulli beams was conducted with this model, and natural frequencies were determined. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of AFG non-prismatic Euler-Bernoulli beams, considering the damping. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was always increased through a grid refinement.
Generation and Mitigation of Conducted Electromagnetic Interference in Power Converters – A Big Picture
Arnold de Beer
Subject: Engineering, Electrical & Electronic Engineering Keywords: Power Converters; Power Electronics; Electromagnetic Interference; EMI; Noise; Differential Mode; DM; Common Mode; CM; Imbalance Difference Model; Boost Converter
This article is a big picture of how electrical noise or conducted Electromagnetic Interference (EMI) is generated and mitigated in power converters. It gives an overview of what EMI in power converters is – from generation through to conduction and mitigation. It is meant to cover the complete subject as a summary so that the reader will have an outline of how to control conducted EMI by design (where possible) and how to mitigate it by filtering. A clear distinction is made between Differential Mode (DM) and Common Mode (CM) EMI generation and mitigation. Using a boost converter as an example, the trade-offs for DM noise control are discussed. It is shown how CM EMI is generated in a boost converter using the concept of the "Imbalance Difference Model" (IDM). Practical measurements for an in-line power filter are given, showing the effect of the filter on the total EMI of a boost converter. Measurements of the CM current produced by the imbalance difference for different values of the boost inductor are also shown. The main contribution of this study is linking CM noise generation to DM EMI. It is shown that CM noise is a direct consequence of DM noise (although circuit imbalance and coupling to a common ground also play a role). This paper will be useful to designers seeking the "bigger picture" of how EMI is generated in power converters and what can be done to mitigate the noise.
A Non-Standard Finite Difference Scheme for Magneto-Hydro Dynamics Boundary Layer Flows of an Incompressible Fluid Past a Flat Plate
Riccardo Fazio, Alessandra Jannelli
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: MHD model problem; boundary problem on semi-infinite interval; non-standard finite difference scheme; quasi-uniform mesh; error estimation
This paper deals with a non-standard finite difference scheme defined on a quasi-uniform mesh for approximate solutions of the Magneto-Hydro Dynamics (MHD) boundary layer flow of an incompressible fluid past a flat plate for a wide range of the magnetic parameter. We show how to improve the obtained numerical results via a mesh refinement and a Richardson extrapolation. The obtained numerical results are favourably compared with those available in the literature.
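The mesh refinement and Richardson extrapolation mentioned above can be illustrated on a toy first-order approximation (a generic sketch, not the paper's MHD solver): halving the step and combining the two results cancels the leading O(h) error term.

```python
import math

def forward_diff(f, x, h):
    """First-order forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

def richardson(f, x, h):
    """Combine the h and h/2 results; for a first-order method the O(h) term cancels,
    leaving an O(h^2) approximation."""
    return 2.0*forward_diff(f, x, h/2) - forward_diff(f, x, h)

x, h = 1.0, 0.1
exact = math.cos(x)
err_plain = abs(forward_diff(math.sin, x, h) - exact)
err_extrap = abs(richardson(math.sin, x, h) - exact)
print(err_plain, err_extrap)  # the extrapolated error is roughly two orders smaller
```

The same combination rule applies to any first-order quantity computed on a mesh and its refinement, which is how error estimation on quasi-uniform meshes is typically done.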
The Effects of Visual Cues, Blindfold, Synesthetic Experience and Music Training on Pure-Tone Frequency Discrimination
Cho Kwan Tse, Calvin Kai-Ching Yu
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: frequency difference limens; blindfold; visual cues; auditory-visual synesthesia; gliding frequencies; perceptual limit, common resource theory; multiple resource model
How perceptual limits can be overcome has long been examined by psychologists. This study investigated whether visual cues, blindfolding, visual-auditory synesthetic experience and music training could facilitate a smaller frequency difference limen (FDL) in a gliding frequency discrimination test. It was hoped that the auditory limits could be overcome through visual facilitation, visual deprivation, involuntary cross-modal sensory experience or music practice. Ninety university students, with no visual or auditory impairment, were recruited for this study, which had one between-subjects factor (blindfold/visual cue) and one within-subjects factor (control/experimental session). A MATLAB program was prepared to test their FDL by an alternative forced-choice task (gliding upwards/gliding downwards/no change), and two questionnaires (Vividness of Mental Imagery Questionnaire & Projector-Associator Test) were used to assess their tendency toward synesthesia. Participants with music training showed a significantly smaller FDL; on the other hand, being blindfolded, being provided with visual cues or having prior synesthetic experience did not significantly reduce the FDL. However, the results showed a trend of reduced FDLs under blindfolding, indicating that visual deprivation might slightly expand the limits of auditory perception. Overall, the current study suggests that inter-sensory perception can be enhanced through training but not through reallocating cognitive resources to certain modalities. Future studies are recommended to verify the effects of music practice on other perceptual limits.
Multimodal Ligand Binding Studies of Human and Mouse G-Coupled Taste Receptors to Correlate with their Species-Specific Sweetness Properties
Fariba M. Assadi-Porter, James Radek, Hongyo Rao, Marco Tonelli
Subject: Biology, Physiology Keywords: Heterodimeric G protein coupled receptor; saturation transfer difference nuclear magnetic resonance spectroscopy; differential scanning calorimetry; circular dichroism; intrinsic fluorescence spectroscopy
Taste signaling is a complex process that is linked to obesity and its associated metabolic syndromes. The sweet taste is mediated through a heterodimeric G protein coupled receptor (GPCR) in a species-specific manner and at multi-tissue specific levels. The sweet receptor recognizes a large number of ligands with structural and functional diversities to modulate different amplitudes of downstream signaling pathway(s). The human sweet-taste receptor has been extremely difficult to study by biophysical methods due to inadequate methods for producing large homogeneous quantities of the taste-receptor protein and a lack of reliable in vitro assays to precisely measure productive ligand binding modes leading to activity upon their interactions with the receptor protein. We report multimodal high-throughput assays to monitor ligand binding, receptor stability and conformational changes to model the molecular interactions between ligand and receptor. We applied saturation transfer difference nuclear magnetic resonance spectroscopy (STD-NMR) complemented by differential scanning calorimetry (DSC), circular dichroism (CD) spectroscopy, and intrinsic fluorescence spectroscopy (IF) to characterize binding interactions. Our method using complementary NMR and biophysical analysis is advantageous for studying the mechanism of ligand binding and signaling processes in other GPCRs.
Cross-sectional Analysis of Beams Subjected to Saint-Venant Torsion Using the Green's Theorem and the Finite Difference Method
Subject: Engineering, General Engineering Keywords: Theory of elasticity; Saint-Venant torsion; Green's theorem; finite difference method; additional nodes; thin-walled sections; stress concentration at reentrant corners; multiply connected cross-section; warping displacement
Online: 1 June 2022 (11:01:06 CEST)
This paper presents an approach to the elastic analysis of beams subjected to Saint-Venant torsion using Green's theorem and the finite difference method (FDM). The Saint-Venant torsion of beams, also called free torsion or unrestrained torsion, is characterized by the absence of axial stresses due to torsion; only shear stresses are developed. A solution to this torsion problem consists of finding a stress function that satisfies the governing equation and the boundary conditions. The FDM is an approximate method for solving problems described with differential equations; it does not involve solving differential equations, as equations are formulated with values at selected nodes of the structure. In this paper, the beam's cross-section was discretized using a two-dimensional grid and additional nodes were introduced on the boundaries. The introduction of additional nodes allowed us to apply the governing equations at boundary nodes and satisfy the boundary conditions. Beams with solid sections as well as multiply connected cross-sections were analyzed using this model; shear stresses and localized stresses at reentrant corners, torsion constant, and warping displacements were determined. Furthermore, beams with thin-walled closed sections, single-cell or multiple-cell, were analyzed using the Prandtl stress function whereby a linear distribution of the shear stresses over the thickness was considered; closed-form solutions for shear stresses and torsion constant were derived. The results obtained in this study showed good agreement with the exact results for rectangular cross-sections, and the accuracy was increased through a grid refinement.
For thin-walled closed sections, the shear stresses obtained at the centerline using the closed-form solutions were in agreement with the values using Bredt's analysis but the maximal values in the cross-section, which did not necessarily occur at the position with the smallest thickness, were higher; in addition, the results using the closed-form solutions were in good agreement with those using FDM.
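As a rough numerical cross-check of the Prandtl-stress-function route (a plain 5-point Laplacian on a solid unit square, far simpler than the paper's model with additional boundary nodes), the classical torsion constant of a square section, J ≈ 0.1406 a^4, can be recovered:

```python
import numpy as np

def torsion_constant_square(n=39):
    """Solve lap(phi) = -2 on the unit square with phi = 0 on the boundary,
    then J = 2 * integral(phi); n is the number of interior nodes per direction."""
    h = 1.0 / (n + 1)
    T = np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
    I = np.eye(n)
    A = (np.kron(I, T) + np.kron(T, I)) / h**2   # 5-point Laplacian, row-major grid
    phi = np.linalg.solve(A, -2.0*np.ones(n*n))
    return 2.0*phi.sum()*h**2                    # rectangle rule; phi = 0 on the boundary

print(torsion_constant_square())  # close to the classical value 0.1406 for a unit square
```

Here phi is normalized so that the stress function satisfies the Poisson equation with unit Gθ; refining n shows the expected second-order convergence.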
Timoshenko Beam Theory: First-Order Analysis, Second-Order Analysis, Stability, and Vibration Analysis Using the Finite Difference Method
Subject: Engineering, Civil Engineering Keywords: Timoshenko beam; finite difference method; additional points; element stiffness matrix; tapered beam; second-order analysis; vibration analysis; direct time integration method
This paper presents an approach to the Timoshenko beam theory (TBT) using the finite difference method (FDM). The Timoshenko beam theory covers cases associated with small deflections based on shear deformation and rotary inertia considerations. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. The model developed in this paper consisted of formulating differential equations with finite differences and introducing additional points at the beam's ends and at positions of discontinuity (concentrated loads or moments, supports, hinges, springs, abrupt change of stiffness, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends. Moreover, grid points with variable spacing were considered, the grid being uniform within beam segments. First-order, second-order, and vibration analyses of structures were conducted with this model. Furthermore, tapered beams were analyzed (element stiffness matrix, second-order analysis, vibration analysis). Finally, a direct time integration method (DTIM) was presented; the FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was increased through a grid refinement. Especially in the first-order analysis of uniform beams, the results were exact for uniformly distributed and concentrated loads regardless of the grid.
A Comparative Analysis of Image Denoising Problem: Noise Models, Denoising Filters and Applications
Subrato Bharati, Tanvir Zaman Khan, Prajoy Podder, Nguyen Quoc Hung
Subject: Mathematics & Computer Science, Other Keywords: Gaussian noise; speckle noise; mean square error (MSE); denoising filters; maximum difference value (MD); peak signal to noise ratio (PSNR)
Noise reduction in medical images is a perplexing task for researchers in digital image processing. Noise introduces critical disturbances and degrades the quality of medical images, particularly ultrasound images in the field of biomedical imaging. An image is normally considered a collection of data, and the presence of noise degrades its quality. It is therefore vital to remove noise from an image in order to recover the maximum information from it. Medical images are degraded by noise during transmission and acquisition. Noise reduces image contrast and resolution, thereby decreasing the diagnostic value of the medical image. This paper mainly focuses on Gaussian noise, pepper noise, uniform noise, salt noise, and speckle noise. Different filtering techniques can be adopted for noise reduction to improve the visual quality and restoration of images. Here, these four types of noise were applied to medical images, and several filtering methods (Gaussian, median, mean, and Wiener) were applied for noise reduction. The performance of the filters was evaluated through parameters such as mean square error (MSE), peak signal to noise ratio (PSNR), average difference value (AD) and maximum difference value (MD), with the goal of diminishing the noise without corrupting the medical image data.
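The quality metrics listed above have standard definitions; a minimal sketch (one common convention assumed, e.g. AD is sometimes defined with an absolute value) is:

```python
import numpy as np

def mse(ref, test):
    """Mean square error between two images."""
    d = ref.astype(np.float64) - test.astype(np.float64)
    return np.mean(d**2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(ref, test)
    return np.inf if m == 0 else 10.0*np.log10(peak**2 / m)

def avg_diff(ref, test):
    """Average difference (signed mean of pixel differences)."""
    return np.mean(ref.astype(np.float64) - test.astype(np.float64))

def max_diff(ref, test):
    """Maximum absolute pixel difference."""
    return np.max(np.abs(ref.astype(np.float64) - test.astype(np.float64)))

a = np.zeros((8, 8), dtype=np.uint8)
b = np.ones((8, 8), dtype=np.uint8)   # every pixel off by exactly 1
print(mse(a, b), psnr(a, b))          # MSE 1.0, PSNR about 48.13 dB
```

A denoising filter is judged good when it drives MSE, AD and MD down and PSNR up relative to the noisy input.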
Complete monotonicity of a difference constituted by four derivatives of a function involving trigamma function
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: complete monotonicity; necessary and sufficient condition; difference; derivative; trigamma function; convolution theorem for the Laplace transforms; Bernstein's theorem for completely monotonic functions
In the paper, by virtue of convolution theorem for the Laplace transforms, Bernstein's theorem for completely monotonic functions, and other techniques, the author finds necessary and sufficient conditions for a difference constituted by four derivatives of a function involving trigamma function to be completely monotonic.
3D Scattering Imaging of Multiscale Geological Media on the Base of Revised Version of Exploding Reflectors
Evgeny Landa, Galina Reshetova, Vladimir Tcheverda
Subject: Earth Sciences, Geophysics Keywords: Common Middle Point; Propagator; Spatial Reflector; small-scale heterogeneities; diffraction/scattering imaging; finite-difference simulation; local grid refinement in time and space.
Computation of Common Middle Point seismic sections and their subsequent time migration and diffraction imaging provide very important knowledge about the internal structure of 3D heterogeneous geological media and are key elements for subsequent geological interpretation. Full-scale numerical simulation, which computes all single-shot seismograms, provides a full understanding of how the features of the image reflect the properties of the subsurface prototype. Unfortunately, this kind of simulation of 3D seismic surveys for realistic geological media needs huge computer resources, especially for the simulation of seismic wave propagation through multiscale media like cavernous fractured reservoirs. In order to significantly reduce the demand for computer resources, we propose to model these 3D seismic cubes directly rather than performing shot-by-shot simulation with subsequent CMP stacking. To do that, we modify the well-known "exploding reflectors principle" for 3D heterogeneous multiscale media by using the finite-difference technique on grids locally refined in time and space. To be able to simulate realistic models and acquisitions, we developed scalable parallel software with reasonable computational costs. Numerical results for the simulation of Common Middle Point sections and their time migration are presented and discussed.
Euler-Bernoulli Beam Theory: First-Order Analysis, Second-Order Analysis, Stability, and Vibration Analysis Using the Finite Difference Method
Subject: Keywords: Euler Bernoulli beam; finite difference method; additional points; element stiffness matrix; tapered beam; first-order analysis; second-order analysis; vibration analysis; direct time integration method
This paper presents an approach to the Euler-Bernoulli beam theory (EBBT) using the finite difference method (FDM). The EBBT covers the case of small deflections, and shear deformations are not considered. The FDM is an approximate method for solving problems described with differential equations. The FDM does not involve solving differential equations; equations are formulated with values at selected points of the structure. Generally, the finite difference approximations are derived based on fourth-order polynomial hypothesis (FOPH) and second-order polynomial hypothesis (SOPH) for the deflection curve; the FOPH is made for the fourth and third derivative of the deflection curve while the SOPH is made for its second and first derivative. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. In this paper, the FOPH was made for all of the derivatives of the deflection curve, and additional points were introduced at the beam's ends and positions of discontinuity (concentrated loads or moments, supports, hinges, springs, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. First-order analysis, second-order analysis, and vibration analysis of structures were conducted with this model. Furthermore, tapered beams were analyzed (element stiffness matrix, second-order analysis). Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was increased through a grid refinement. 
Especially in the first-order analysis of uniform beams, the results were exact for uniformly distributed and concentrated loads regardless of the grid. Further research will be needed to investigate polynomial refinements (higher-order polynomials such as fifth-order, sixth-order, etc.) of the deflection curve; these refinements aim to increase the accuracy, whereby non-centered finite difference approximations at the beam's ends and positions of discontinuity would be used.
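A stripped-down sketch of this idea (uniform simply supported beam, uniform grid, central differences only, without the paper's additional end points) reproduces the textbook midspan deflection 5qL^4/(384EI) by splitting EI w'''' = q into two second-order problems:

```python
import numpy as np

def solve_dirichlet_poisson(rhs, h):
    """Solve y'' = rhs at interior nodes with y = 0 at both ends (central differences)."""
    n = len(rhs)
    A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
         + np.diag(np.ones(n-1), -1)) / h**2
    return np.linalg.solve(A, rhs)

L = EI = q = 1.0
N = 200                 # number of grid intervals
h = L / N
# EI*w'''' = q with w = w'' = 0 at both ends splits into two Poisson problems:
u = solve_dirichlet_poisson(np.full(N-1, q/EI), h)  # u = w'' (curvature), zero at ends
w = solve_dirichlet_poisson(u, h)                   # deflection, zero at ends
print(w.max())          # close to 5*q*L**4/(384*EI) = 0.0130208...
```

The split works here only because the simply supported boundary conditions make both w and w'' vanish at the ends; the paper's additional-point technique is what handles the general boundary and discontinuity conditions that this sketch avoids.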
Alcohol Dependence Induces CRF Sensitivity in Female Central Amygdala GABA Synapses
Larry Rodriguez, Dean Kirson, Sarah A. Wolfe, Reesha R. Patel, Florence P. Varodayan, Angela E. Snyder, Pauravi J. Gandhi, Sophia Khom, Roman Vlkolinksy, Michal Bajo, Marisa Roberto
Subject: Life Sciences, Cell & Developmental Biology Keywords: corticotropin releasing factor (CRF); patch-clamp electrophysiology; sex difference; alcohol use disorder (AUD); Gamma-Aminobutyric Acid (GABA); central amygdala (CeA); spontaneous inhibitory post synaptic currents (sIPSCs)
Alcohol use disorder (AUD) is a chronically relapsing disease characterized by loss of control in seeking and consuming alcohol (ethanol), driven by recruitment of brain stress systems. However, AUD differs between the sexes: men are more likely to develop AUD, but women progress from casual to binge drinking and heavy alcohol use more quickly. The central amygdala (CeA) is a hub of stress and anxiety, with corticotropin releasing factor (CRF)-CRF1 receptor and GABAergic signaling dysregulation occurring in alcohol-dependent male rodents. However, we recently showed that GABAergic synapses in female rats are less sensitive to the acute effects of ethanol. Here, we used patch-clamp electrophysiology to examine the effects of alcohol dependence on the CRF modulation of rat CeA GABAergic transmission in both sexes. We found that GABAergic synapses of naïve female rats were unresponsive to CRF application compared to males, although alcohol dependence induced a similar CRF responsivity in both sexes. In situ hybridization revealed that females had fewer CeA neurons containing mRNA for the CRF1 receptor (Crhr1) than males, but in dependence, the percentage of Crhr1-expressing neurons increased in females, unlike in males. Overall, our data provide evidence for sexually dimorphic effects of the CeA CRF system on GABAergic synapses in dependence.
Two-Dimensional Stress Analysis of Isotropic Deep Beams Using the Finite Difference Method
Subject: Engineering, Civil Engineering Keywords: Finite difference method; additional nodes; Airy stress function; displacement potential function; deep beam of varying thickness; layered beam; deep beam having openings; skew edge; buckling analysis
This paper presents an approach to the two-dimensional analysis of elastic isotropic deep beams using the finite difference method (FDM). Deep beams are subjected to in-plane loading and present a shear span to height ratio of less than 2.50; consequently, Euler-Bernoulli beam theory and Timoshenko beam theory do not apply. Deep beam analysis is generally conducted using numerical methods such as the finite element method and, to a lesser extent, the FDM; the strut-and-tie model and the stress field method are also widely utilized. Analytical approaches usually make use of the Airy stress function, where stresses are formulated in terms of the stress function; however, the exact solution of this function satisfying all of the boundary conditions can hardly be found, even for simple cases. In this paper, deep beams were analyzed using the FDM. The FDM is an approximate method for solving problems described with differential equations. The FDM does not involve solving differential equations; equations are formulated with values at selected nodes of the structure. Therefore, the deep beam was discretized with a two-dimensional grid, and additional nodes were introduced at the boundaries and at positions of discontinuity (openings, abrupt change of material properties, non-uniform grid spacing), the number of additional nodes corresponding to the number of boundary conditions at the node of interest. The introduction of additional nodes allowed us to apply the governing equations at boundary nodes and satisfy the boundary and continuity conditions. An Airy stress function approach and a displacement potential function approach were considered in this study whereby strong formulations of equations (equilibrium, kinematic, and constitutive) were set. Stress and stability analyses were carried out with this model; furthermore, deep beams of varying stiffness, layered beams, and beams having openings were analyzed.
For slender beams, the results obtained with the Airy stress function approach showed good agreement with those of the Euler-Bernoulli beam theory, and for deep beams, the computed stress distributions were consistent with the expected behavior of such structures. On the other hand, the displacement potential function approach delivered unsatisfactory results, probably due to the use of an inefficient equation solver; a more powerful tool will be needed in future research for this purpose.
Vegetable diversification in cocoa-based farming systems of Ghana
Justice G. Djokoto (ORCID: orcid.org/0000-0002-2159-2944),
Victor Afari-Sefa &
Albert Addo-Quaye
Agriculture & Food Security, volume 6, Article number: 6 (2017)
As part of dynamic livelihood coping strategies, some farmers in Ghana's cocoa belt have diversified away from traditional cocoa production to other high-value crops including vegetables, to the extent of diversifying within vegetables. This study assessed the extent of diversification of vegetables among farmers in Ghana's cocoa belt and determined the factors that explain the variability in the diversification indices. A sample-size formula (http://www.surveysystem.com/sscalc.htm), applied to the estimated population of vegetable farmers, yielded 621 farmer respondents from the Ashanti and Western Regions of Ghana. A combination of proportional and random sampling was employed to select farmers for the interview.
Cocoa cultivation, marital status of the household head and total land area were the major determinants of diversification.
Unlike most other studies found in the crop diversification literature, this study used econometric data reduction procedures to select the appropriate diversification indices, and selected the most appropriate fractional regression functional form from the four modelled. Vegetable diversification offers great potential for improving livelihoods of cocoa-based farm households in the study area.
Vegetable production can enhance the income of smallholder producers through high farmgate values per unit land area and generate employment in rural areas [5, 24, 33, 44]. Vegetables can also make important contributions to food and nutritional security, as they contain essential micronutrients and confer other essential health benefits. Aside from these, traditional African vegetables such as Amaranthus spp. in particular are considered very valuable because of their comparatively higher micronutrient content compared with exotic vegetables and their ability to fit into year-round production systems. Vegetables thus play an important socio-economic role as well as a role in diversifying diets for improved nutrition [29]. In Ghana, export of vegetables such as okra to the European Union generates considerable foreign exchange [3, 19].
Some studies have pointed to diversification of dominant farm production systems with other commodities such as vegetables in developing countries [1, 17, 23, 24, 43]. In agriculture, diversification may be viewed as a three-stage process [8]. The first stage is considered at the cropping level which involves a shift away from monoculture. At the second stage, farm households have more than one enterprise and produce many crops that they could potentially sell at different times of the year. The final stage is mostly referred to as mixed farming where there is a shift of production resources from one crop (or livestock) to a larger mix of crops (or livestock) or mix of crops and livestock. Within this context, vegetable diversification is a sub-type of stage two, in which diversification is within one group of crops, in this case vegetables.
Overall, diversification is a significant factor explaining differences in the level and variability of farm income between higher and lower performing small farms [35, 39]. The benefits of crop diversification are threefold: economic, social and agronomic. The economic benefits include: seasonal stabilisation of farm income to meet other basic household livelihood needs such as children's education; household subsistence, food and nutrition needs; and a reduction of risk of overall farm returns by selecting a mixture of activities whose net returns have a low or negative correlation whilst lessening price fluctuations [21, 40]. One social benefit is the seasonal employment for casual farm workers, whilst agronomic benefits include conserving precious soil and water resources, reduced disease and pest incidence, reduced soil erosion and improved soil fertility alternatives as well as options for increasing plant nutrition and crop yields [2, 7, 9, 18].
Cocoa (Theobroma cacao) is grown in most parts of the humid tropics agroclimatic zone of several West African countries, particularly Cameroon, Côte d'Ivoire, Ghana, Liberia and Nigeria, on account of the zone's comparative advantage. In Ghana, the bulk of cocoa, the country's main agricultural export, emanates from the Western and Ashanti regions in the humid tropics zone. However, owing to the diverse merits of diversification enumerated above, some farmers have diversified away from cocoa to other crops including vegetables. Others have gone beyond this to diversify within vegetables, producing different vegetables on the same plot of land or on different plots. The main vegetables in contention are tomato (Lycopersicon esculentum), hot pepper (Capsicum annuum), African eggplant (Solanum aethiopicum, S. anguivi and S. macrocarpon) and okra (Abelmoschus esculentus). The less popular ones are cabbage (Brassica oleracea var. capitata), cucumber (Cucumis sativus) and carrot (Daucus carota). This study seeks to assess the extent of diversification of vegetables among farmers in Ghana's cocoa belt and identify the factors that account for the variability in the diversification index [13].
Although some studies have investigated diversified production of vegetables in some developing countries [17, 23, 24, 43], Ali [1] seemed to be the first study to have explicitly addressed the socio-economic determinants of vegetable diversification in India. The study as could be expected used the Simpson's diversification index that was modelled using a logistic regression a priori. Two limitations are likely to have emerged from this study. First, other diversification indices seemed not to have been considered for superlative analysis. Second, the data generation process (DGP) of the Simpson's diversification index is fractional therefore the logistic regression as used is perhaps not very appropriate for obtaining robust estimates. The present study estimated various diversification indices including the Simpson's index and selected the best-bet alternative for the study based on statistical procedures. Different functional forms of the fractional regression were estimated, and the most appropriate selected based on a battery of tests. The data used were drawn from vegetable farmers in the cocoa belt of the Western and Ashanti Regions of Ghana.
An investor typically invests in stocks or other units of investment to maximise returns. If the investor knew the extent of future returns with certainty, he/she would invest in only one security out of the lot, namely the one with the highest future return. If several investment units had the same, highest, future return, then the investor would be indifferent between any of these, or any combination of these [27]. For this reason, the investor would not diversify the combination or portfolio of investment units. In reality, however, the future returns of all investment units are unknown. Therefore, to reduce uncertainty, the investor diversifies by adding other investment units to the portfolio. The underlying motive is to ensure that whilst some units may fail to generate the expected return on investment, others will. The vegetable farmer may be considered as an investor, with his/her vegetable crops regarded as investment units. Malton and Fafchamps [26] noted that crop diversification is a risk-minimising strategy to the extent that individual crop yields are not closely correlated under diverse weather conditions, pests and disease attack. See [4, 15, 25, 28] for a comprehensive review of the theoretical literature on multiple cropping systems and crop diversification. Diversification is certainly motivated by uncertainty for the vegetable farmer: climate change, prices and other factors. Vegetable farmers ultimately seek not only the expected income but food and nutritional security as well [22, 29, 44].
As noted earlier and to the best of our knowledge, Ali [1] is the only study that specifically addressed the factors determining vegetable diversification. Indeed, in the diversification of non-vegetable crops the situation is not different; Shaxon and Tauer [41] seemed to be the only relevant study. These two are briefly discussed. Ali [1] analysed the factors affecting adoption of crop diversification as a risk management strategy in vegetable production with data collected from 556 farmers drawn from eight districts in Uttar Pradesh, India. The mean age of farmer respondents for this study was 40.33 years. The largest educational category was secondary and/or higher secondary, constituting about 33.0% of the total sample. The average land area was 1.77 ha (4.38 acres). About 80% of vegetable growers adopted crop diversification with a mean Simpson diversification index (SDI) of 0.80. Results from an estimated logistic regression model showed that comparatively younger, socially underserved farmers with lower income were more likely to adopt diversification as a risk-mitigating strategy. Use of high-yielding seed, temperature volatility, high marketed surplus ratio, market demand, clustering of organised buyers and adoption of recommended processing techniques were most likely to influence adoption of vegetable diversification.
The work of Shaxon and Tauer [41] is probably one of the earliest known published empirical studies on crop diversification in Africa. Examining the effects of socio-economic variables on crop diversification computed using the Simpson diversification index (SDI) and Shannon entropy index (SEI), the authors found that neither the SDI nor the SEI was better than the other. The total land endowment of households was incorporated into the model both as levels and as squared values. Household type did not statistically influence crop diversification. Land endowment was positively related to diversification. Age was hypothesised to positively influence crop diversity, the rationale being that the age of the principal operator would be linked to knowledge of the minutiae or intricacies of the farm system, of the micro-environment and of the suitability of different crops to different areas. However, they found a mix of negatively and positively signed coefficients but without any statistical significance. In the case of education, field crop agriculture was taught in most primary and secondary schools with a concentration on cash crops in pure stands and the use of fertilisers and pesticides. Therefore, the coefficient of education of the principal operator was hypothesised to be negative. The consistently negative sign of the coefficient pointed to a weak negative correlation between education and the diversity indices; however, the magnitudes of the coefficients were not statistically significant.
The study area is located in the cocoa belt of the Ashanti and Western Regions (WRs) of Ghana. The Ashanti Region is centrally located in the middle belt of Ghana. Lying within longitudes 0.15°W and 2.25°W and latitudes 5.50°N and 7.46°N, the Ashanti Region shares boundaries with four of the ten administrative regions. The WR covers an area of 23,921 km2, representing about 10% of Ghana's total land surface. Located in the south-western part of Ghana, the WR is bordered by Côte d'Ivoire to the west, the Central Region to the east, the Ashanti and Brong-Ahafo Regions to the north and, to the south, by 192 km of Atlantic Ocean coastline. Agriculture is the predominant occupation of the economically active population in the region, accounting for about 60% of the regional GDP and employing about 57% of the total labour force. The WR is currently the leading producer of cocoa beans in Ghana.
Data collection procedure
Sample sizes were determined based on the population of vegetable farmers identified in the cocoa-growing areas of each region. A sample-size determination formula (http://www.surveysystem.com/sscalc.htm) was applied to the estimated population of farmers in each district to determine appropriate sample sizes for the study (Table 1). Proportional sampling was employed to determine the sample size for each community. The sample elements were then selected randomly from a population list of vegetable farmers generated earlier.
Table 1 Description of variables
Diversification index
Various diversification indices are available in the literature. These include the composite entropy, entropy, modified entropy, weighted entropy, Herfindahl, index of maximum proportion, Ogive, Shannon and Simpson indices. Brummer et al. [6] and Ogundari [31], for example, used the Herfindahl and Ogive indices to study crop diversification in Nigeria, whilst Ogbanje and Nweze [30] used entropy and weighted entropy to investigate off-farm diversification, also in Nigeria. In the present study, the Simpson, Herfindahl and entropy indices were employed and a best-bet index selected based on statistical procedures. The Simpson diversification index (SDI) is specified as:
$${\text{SDI}} = 1 - \sum\limits_{k = 1}^{K} {P_{k}^{2} }$$
where P_k is the proportion of farm area devoted to vegetable type k. The value of SDI always falls between 0 and 1. For a single vegetable, P_k = 1 and therefore SDI = 0. As the number of vegetable types increases, the shares P_k decline, as does the sum of the squared shares, so that SDI approaches 1. If there are K vegetables, then SDI falls between zero and 1 − 1/K. Farmers with the most diversified vegetable farms will have the largest SDI, whilst those with the least diversified farms are associated with the smallest SDI. For the least diversified vegetable farmers (i.e., those cultivating a single vegetable), SDI takes on its minimum value of 0.
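As a concrete illustration, the SDI computation above can be sketched in a few lines of Python (the area figures are hypothetical):

```python
def simpson_diversification_index(areas):
    """Simpson diversification index: SDI = 1 - sum(P_k^2),
    where P_k is the share of total farm area under vegetable k."""
    total = sum(areas)
    shares = [a / total for a in areas]
    return 1 - sum(p ** 2 for p in shares)

# A farmer cultivating a single vegetable: SDI = 0 (complete specialisation).
print(simpson_diversification_index([2.0]))         # 0.0
# Four vegetables in equal shares: SDI = 1 - 4 * (1/4)^2 = 0.75,
# the maximum attainable with K = 4 (i.e. 1 - 1/K).
print(simpson_diversification_index([1, 1, 1, 1]))  # 0.75
```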
The Herfindahl index can be expressed as:
$${\text{HDI}} = \sum\limits_{j = 1}^{J} {\left( {\frac{{Y_{j} }}{{\sum\nolimits_{j = 1}^{J} {Y_{j} } }}} \right)^{2} } \quad 0 \le {\text{HDI}} \le 1$$
where Y_j represents the area share of the jth vegetable cultivated in total area Y and J is the total number of vegetables cultivated on the total land area. The HDI ranges from 0, reflecting complete diversification (i.e., an infinite number of vegetables in equal proportion), to 1, reflecting complete specialisation. It can be shown that this index attains a minimum value equal to 1/J. The HDI can be transformed as 1 − HDI in order to have an interpretation similar to the SDI. In this way, a transformed HDI of 1 reflects perfect diversification, whilst 0 reflects perfect specialisation.
The Shannon entropy index of diversification is specified as:
$${\text{EDI}} = - \sum\limits_{j = 1}^{J} {S_{j} \log S_{j} }$$
where S_j is the proportion of area under vegetable j, J is the total number of vegetables and EDI is the entropy index.
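The Herfindahl and entropy indices can be computed side by side; a minimal Python sketch (with hypothetical plot areas, using natural logs for the entropy index) also shows that the transformed Herfindahl index, 1 − HDI, reproduces the SDI:

```python
import math

def hdi(areas):
    """Herfindahl index: sum of squared area shares (1 = complete specialisation)."""
    total = sum(areas)
    return sum((a / total) ** 2 for a in areas)

def edi(areas):
    """Shannon entropy index: EDI = -sum(S_j * log(S_j)), natural logs."""
    total = sum(areas)
    return -sum((a / total) * math.log(a / total) for a in areas if a > 0)

areas = [1.0, 0.5, 0.5]          # hypothetical tomato, okra and hot pepper plots (ha)
print(round(1 - hdi(areas), 3))  # 0.625 -- the transformed HDI, identical to the SDI
print(round(edi(areas), 3))      # 1.04
```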
Two approaches were available for selecting the most appropriate of the computed diversification indices. The first selects one index using factor analysis [20]. The second involves modelling each of the indices and testing each model through inspection of model properties and rigorous tests such as the P test, in order first to select the most appropriate functional form for each index, and then to choose the best among the selected models [10, 36]. Since the latter involves considerably more effort yet provides the same results as the former, the former approach was adopted for this study.
Modelling of vegetable diversification index (VDI)
In order to investigate determinants of vegetable diversification, the following equation was estimated:
$${\text{VDI}} = f(Z_{m} )$$
where VDI is the selected diversification index and Z is the vector of m socio-economic variables listed in Table 1.
Fractional regression modelling
The indices outlined above indicate a fractional DGP. Therefore, the use of ordinary least squares (OLS) and Tobit regression estimation procedures, as proposed by Brummer et al. [6], Ogundari [31] and Ogbanje and Nweze [30], is likely to be inappropriate in the context of our study. Indeed, the use of OLS does not guarantee that predicted values will fall between zero and one. A logit transformation of the dependent variable would have been more appropriate in this context, as was done by Ali [1]. However, fractional regression is certainly more appropriate since it utilises the set of numbers within the unit interval rather than only the 0 and 1 boundary values as the logit transformation does. Consequently, the fractional regression approach proposed by Papke and Wooldridge [34] is employed in the context of this study.
Let y be VDI, then
$$E(y|Z) = Z\theta$$
And the marginal effect of a unit change in Z m on VDI score is given as
$$\frac{\partial E(y|Z)}{{\partial Z_{j} }} = \theta_{j}$$
Then, the fractional regression may be specified as:
$$E(y|Z) = G(Z\theta )$$
where G(\(\bullet\)) is some nonlinear function satisfying 0 ≤ G(\(\bullet\)) ≤ 1.
We follow Ramalho et al. [36] by testing four functional forms in order to select one (best-bet) for discussion.
Let G(\(\bullet\)) be specified as any cumulative distribution function: logit, probit, loglog and cloglog.
Logit:
$$G(Z\theta ) = \frac{{{\text{e}}^{Z\theta } }}{{1 + {\text{e}}^{Z\theta } }}$$
Probit:
$$G(Z\theta ) = \varPhi (Z\theta )$$
Loglog:
$$G(Z\theta ) = {\text{e}}^{{ - {\text{e}}^{ - Z\theta } }}$$
Cloglog:
$$G(Z\theta )\, = 1 - {\text{e}}^{{ - {\text{e}}^{Z\theta } }}$$
with partial effect for all specifications given as
$$\frac{\partial E(y|Z)}{{\partial Z_{j} }} = \theta_{j} g(Z\theta )$$
Unlike the constant marginal effect θ_j of the linear specification in Eq. 7, this partial effect varies with g(Zθ).
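The four link functions and the resulting variable marginal effects can be sketched as follows (pure Python; the index value z = Zθ = 0.7 and coefficient θ_j = 0.4 are purely illustrative):

```python
import math

# The four cumulative link functions G(.), each mapping the linear
# index z = Z*theta into the unit interval.
def logit(z):   return math.exp(z) / (1 + math.exp(z))
def probit(z):  return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
def loglog(z):  return math.exp(-math.exp(-z))
def cloglog(z): return 1 - math.exp(-math.exp(z))

z = 0.7
for G in (logit, probit, loglog, cloglog):
    assert 0 < G(z) < 1  # every specification keeps predictions inside (0, 1)

# Marginal effect of covariate j under loglog: theta_j * g(z), where g is the
# derivative of G (approximated here by a central finite difference).
theta_j, h = 0.4, 1e-6
g = (loglog(z + h) - loglog(z - h)) / (2 * h)
print(round(theta_j * g, 4))  # ~0.1209, and it changes with z, unlike the linear model
```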
In this study, the FRM is specified as:
$$E(y|Z) = G(Z\theta )$$
where Z is the vector of covariates and \(G( \bullet )\) is estimated in turn as logit, probit, loglog and cloglog.
Since the data covered two administrative regions, it was important to consider controlling for regional effects. A log likelihood ratio (LR) test was performed to establish the appropriate course of action. The null hypothesis required the exclusion of the regional dummy, with the alternative hypothesis supporting the inclusion of the regional dummy. The LR test was computed as LR = 2(lnL_1 − lnL_0), where lnL_1 and lnL_0 are the maximised log-likelihoods of the models with and without the regional dummy, respectively. The LR test statistic has a Chi-square distribution; hence, the Chi-square table was used to decide which model was appropriate. Prior to testing, the models were estimated by maximum likelihood procedures.
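A sketch of the LR computation follows; the two log-likelihood values below are purely illustrative, not the study's estimates:

```python
import math

# LR = 2 * (lnL1 - lnL0): twice the log-likelihood gain from adding the
# regional dummy, referred to a chi-square with 1 degree of freedom.
llf_without_dummy = -210.4   # hypothetical restricted model (H0)
llf_with_dummy    = -209.7   # hypothetical unrestricted model (H1)
lr = 2 * (llf_with_dummy - llf_without_dummy)

# chi-square(1) p-value via the standard normal: P(chi2 > x) = 2 * (1 - Phi(sqrt(x)))
phi = 0.5 * (1 + math.erf(math.sqrt(lr) / math.sqrt(2)))
p_value = 2 * (1 - phi)
print(round(lr, 2), round(p_value, 3))  # 1.4 0.237 -> fail to reject; drop the dummy
```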
Specification tests
Two tests were employed to assess the functional forms in their own right and as a basis for selecting the most appropriate one: the generalised goodness-of-functional-form (GGOFF) test [36, 37] and the P test [10]. The estimation was accomplished using the STATA module developed by Ramalho [37]. The GGOFF test performs functions similar to the RESET test [38]. Whilst the RESET test assigns an arbitrary number of powers of the fitted index, the GGOFF test checks the significance of two simple functions of the fitted index. Consequently, the GGOFF is used in place of the RESET test in this study. For an exposition of the details of GGOFF, see for example [36, 37].
Summary statistics of the scale variables show that the youngest vegetable farmer is aged 18 years (Table 2).
Table 2 Summary statistics of scale measured variables
On average, six people constituted a household. In terms of land area, the smallest holding is 0.08 ha (0.2 acres) and the largest is 28.34 ha (70 acres). The mean farm size of 1.21 ha (3.0 acres) clearly shows that the maximum of 28.34 ha is an outlier.
Vegetable diversification index
Table 3 shows the results of the factor analysis conducted. The first panel contains communalities and the component matrix. One minus the communalities expresses the uniqueness: the variance that is 'unique' to the variable and not shared with other variables.
Table 3 Results of factor analysis of measured variables
The high communalities show that the indices share variances; hence, the uniqueness, or variance not shared, is minuscule. Despite the generally small uniqueness values, EDI has the highest uniqueness. Since the higher the uniqueness, the lower the relevance of the variable in the factor model, the highest uniqueness of EDI makes it the least relevant in the factor model. By contrast, the SDI and HDI are more relevant in the factor model.
Turning to the next panel, factor 1 has total eigenvalue of 2.983 whilst the other two factors have values <0.01. Using the Kaiser Criterion, factor one is retained. Since SDI and HDI load on factor 1, these are the variables that constitute factor 1. By construction, the transformation of HDI equals SDI and HDI is negatively and perfectly correlated with SDI. Thus, SDI can be used in place of HDI.
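The claim that HDI is perfectly negatively correlated with SDI follows directly from the construction SDI = 1 − HDI; a toy check over four hypothetical farms illustrates this:

```python
import math

# Hypothetical per-farm plot areas; HDI and SDI computed for each farm.
farms = [[1.0], [1.0, 1.0], [2.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]
hdi = [sum((a / sum(f)) ** 2 for a in f) for f in farms]
sdi = [1 - h for h in hdi]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(corr(sdi, hdi), 6))  # -1.0: perfectly negatively correlated by construction
```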
The minimum SDI of 0 was recorded by vegetable farmers in both the Ashanti and Western Regions (Table 4). However, farmers in the WR recorded a higher maximum SDI (0.80) than those in the Ashanti Region (0.75). The means are fairly similar, 0.37 and 0.41, respectively. The null hypothesis of equal means is upheld; thus, the observed difference may well be due to chance.
Table 4 Descriptive statistics of Simpson diversification index
The results in the first panel of Table 5 show that the null hypothesis of no regional effect is accepted. This confirms the earlier test of the difference in the means of SDI from the two regions. Consequently, the regional effect was not accounted for in the four functional forms of the FRM estimates. The second panel of Table 5 shows the results of the GGOFF test. For all four functional forms, the null hypothesis of correct specification cannot be rejected; therefore, the FRMs for all functional forms are well specified.
Table 5 Hypothesis tests
Functional form selection
All four functional forms of the FRM are well specified; therefore, selecting one for discussion is not trivial. Since the functional forms are not nested forms of each other, the nested log likelihood ratio test does not apply in this case. The Davidson and MacKinnon [10] P test for non-nested models is thus applicable. Using logit as the null hypothesis and testing against the other three as alternatives, the logit function is rejected in favour of the loglog functional form. Using probit as the null hypothesis and the others as alternative hypotheses, probit is rejected in favour of loglog and cloglog functional forms. Using loglog as null hypothesis and testing against the others as alternative hypotheses, loglog is rejected in favour of cloglog and logit.
A closer examination of the statistics shows that, for cloglog as the alternative, 4.552 is higher than the 3.814 for logit, although both are statistically significant. The statistical significance of the logit statistic is particularly interesting since logit had earlier been rejected in favour of loglog with a statistic of 7.259 at the 1% level of significance. The ideal way out is to compare the magnitudes of the statistics, provided they are statistically significant. In that respect, loglog should be preferred to the logit functional form. Moreover, the statistic for loglog as the alternative hypothesis is significant at a stronger level (1%) than that for logit as the alternative hypothesis (5%).
Turning to the last column of the third panel of Table 5, with cloglog as the null hypothesis and the other functional forms as alternative hypotheses, the cloglog is rejected in all cases. Although all statistics are statistically significant, the magnitude for loglog is the highest among the three; therefore, loglog is ranked first. It is important to note that, across all four functional forms, loglog is rejected only once, that is, when loglog was the null hypothesis.
Table 6 Estimated loglog fractional regression model
Thus, the focus of the model selection should be between loglog and cloglog. The magnitude of the statistic for loglog as the alternative hypothesis and cloglog as the null hypothesis is 12.270, significant at 1%, whilst that for loglog as the null hypothesis and cloglog as the alternative hypothesis is 4.552, significant at the 5% probability level. Loglog rejects cloglog more strongly than cloglog rejects loglog. Thus, the loglog functional form is selected for further discussion.
Determinants of vegetable diversification
Out of the ten factor determinants investigated, seven are statistically insignificant whilst three are statistically significant (Table 6). The statistically insignificant parameters mainly relate to household socio-economic characteristics such as age, gender, household type and level of formal education of the head of household; other variables include household size, utilisation of vegetable produce and total land endowment. The statistically significant parameters relate to cocoa cultivation, marital status of the household head and total land area of the household.
Cocoa cultivation was measured as a dummy variable; thus, the negative sign of the coefficient and marginal effect suggests that vegetable farmers who cultivate cocoa are more likely to diversify vegetable production. Moreover, cocoa farmers commonly cultivate other crops on cocoa plots at the early stages of the cocoa plants; indeed, a number of vegetable farmers noted this in their responses during the field survey. Marital status of the household head is a dummy variable designated 1 if the head is married and lives with the spouse; the other extreme (designated 0) is 'never married'. It will be recalled from Table 2 that more than 80% of the surveyed households had married household heads living with their spouses. This could reflect the married couple's responsibility for providing for the nutritional needs of the entire household, which would warrant own-produced vegetables as part of household production decisions. The coefficient and marginal effect for total land area are statistically significant and positively related to vegetable diversification, whilst the parameter estimates of total land endowment are negative and statistically insignificant.
Respondents' uses for vegetables included consumption, income from sales and seed production. Close to 88% of respondents use vegetables as a source of income; by comparison, 11% use vegetables for consumption and 1% produce vegetables for seed. Table 6 shows that variations in household type and household size do not significantly influence vegetable diversification. Indeed, along the continuum of these variables, the consumption, income and seed purposes of vegetables are equally important.
The mean age of 41.8 years is close to the 40.33 years found by Ali [1]. The mean farm size of 1.21 ha is slightly lower than the 1.77 ha (4.38 acres) reported by Ali [1]. The fact that as many as 30% of the farmers have never had formal education poses a challenge for agricultural extension, as training content and pedagogy would have to be tailored to the needs of these farmers so as to achieve maximum learning and ensure training impact. Ali [1] found that the largest category of vegetable farmers had secondary and/or higher secondary qualifications, inconsistent with the findings of the present study.
The mean SDI (0.39) obtained for our Ghanaian study locale is far lower than the 0.80 reported by Ali [1] for eight districts in Uttar Pradesh, India. Yet vegetables are high-value crops and provide diverse nutrients necessary for income and nutritional food security in most parts of sub-Saharan Africa, including Ghana. In the light of these considerations and the low levels of vegetable diversification, growing and diversification of vegetables should be encouraged among farmers in Ghana's cocoa belt. Our study results show rather low levels of diversification into vegetables within the cocoa belt of the study locale, a fact that buttresses the findings of Ganry [16], who found that in Ghana only 49% of the World Health Organisation's recommended vegetable consumption of 200 g per capita per day is consumed on average.
The selection of loglog in this study departs from those found in the FRM agricultural economics literature and is a point of departure for this paper. Specifically, Souza and Gomes [42] specified probit. Whilst Ogundari [32] specified logit a priori, Djokoto [11] selected logit and Ramalho et al. [36] selected cloglog based on a battery of tests. Djokoto and Gidiglo [12] and Djokoto et al. [14], however, selected the loglog functional form.
Four reasons may account for the statistically significant negative coefficient for the cocoa cultivation variable. First, those who cultivate cocoa have access to adequate land, either owned or rented. In the case of rented land, diversification ensures that the farmer is able to earn sufficient income and pay the land rent. In the case of share-cropping tenancy arrangements, where the landlord receives part of the produce (usually a third, locally called the abusa system), higher returns are only guaranteed with more output. In the case of owned land, this is a great resource to the farmer, as a major cost of production in the seasonal gross margin computation is practically not factored into the equation. Second, given that cocoa yield and proceeds are seasonal, farmers are motivated to diversify their production from cocoa to other crops such as vegetables and, more so, to diversify within vegetable production. Third, resources obtained from cocoa production are usually invested in off-farm income ventures that can be used to support vegetable production. Fourth, barring any price-taking perfectly competitive tendencies in markets caused by external factors, farmers usually exercise some level of control over vegetable pricing, particularly during off-season periods, unlike the case of cocoa beans, whose prices are fixed and guaranteed by the Ghana Government at the commencement of each production season.
Aside from the optimal use of land resources, diversifying into vegetables affords cocoa farmers the opportunity to earn diversified income when the main crop is not yet ready for harvest, particularly at the early stages (first 2–3 years) of cocoa establishment, when some vegetables can be used as shade crops for young cocoa plants. This is particularly essential for large farm households, given that some leafy vegetables such as Amaranthus spp. can mature as early as 3 weeks from planting. Cocoa farmers should therefore be encouraged to consider selecting some vegetable crops for cultivation in cocoa farms at the early stages of cocoa establishment, in addition to traditional cocoa-shade crops such as plantain and cassava. Vegetable farmers without cocoa plots may consider cultivating cocoa as well. Where this is not possible, vegetable farmers can consider arrangements that give them access to cocoa farms at the early stages of the cocoa crop; the vegetable farmer plants (diversified) vegetables during the period until the cocoa canopy disallows such activities. Cocoa farmers may share the vegetable proceeds accordingly.
Generally, marital status creates a more likely opportunity for increased household expenditure, which will have to be met by higher income. Aside from this, household heads, who are predominantly male, have a cultural and social responsibility to cater for the monetary needs of their families and households. Households would thus have to diversify into high-value vegetable crops per unit of land to earn higher net year-round income, rather than traditionally depending solely on the seasonal income accruing from cocoa or from only one vegetable. In addition, there is a higher chance of improving and ultimately ensuring household nutrition security by way of the availability and likely intake of the diverse vegetables required for a balanced diet.
As noted earlier, access to land is necessary for vegetable production, as in many agricultural endeavours. A larger land size implies more access to a major resource for vegetable cultivation; more land also means the opportunity to cultivate different vegetables. The findings from the study call for efforts that would improve access to land as well as increase land area. Certainly, land fragmentation, due to inheritance among other causes, should be discouraged.
Whilst the negative sign suggests some influence of owned land on vegetable diversification, this may be purely by chance. Indeed, owned and rented land influence vegetable diversification equally, at least from the statistical viewpoint. This result implies that the most critical determinant with respect to land use is land access (user rights) rather than owning land per se. This is underscored by the findings of Ali [1].
The statistical insignificance of the utilisation of vegetable produce implies that the (own) seed and consumption uses of vegetables play as strong a role as income, despite the disproportionate percentages. This further buttresses the nutritional and food security role of vegetables, as noted earlier. Also, given the spatial and temporal gaps in the vegetable seed supply and distribution system, reliance on own-saved seeds as sources of planting material for subsequent production seasons is a common phenomenon among vegetable farmers in the study locale.
The statistical insignificance of household type and household size implies that diverse household types and sizes should be equally targeted with vegetable diversification efforts. The positive but statistically insignificant parameters of the education variable suggest that, although formal education may be useful in general, it is not essential for vegetable diversification in particular. Indeed, the large majority of farmers with little or no formal education diversified vegetables as much as highly educated vegetable farmers did. Two reasons can be adduced: the availability of agricultural extension services, and the possible accumulation of experience in vegetable farming. Therefore, although formal education may be important, experience and extension support would be useful in promoting vegetable diversification for income, seed and consumption.
The statistical insignificance of the gender parameter means that males diversify vegetables as much as females do. Indeed, gender disparity that might necessitate affirmative action in vegetable diversification may not be warranted. Since this study did not explicitly investigate the gender division of labour, further research is required in this area. Vegetable diversification is also age-neutral; farmers of all ages tended to diversify vegetable production to a similar extent. The sign of the parameters is consistent with the finding of Ali [1], but the statistically insignificant magnitude diverges from Ali [1].
This study assessed the extent of diversification of vegetables among farmers in Ghana's cocoa belt and identified the factors that explain the variability in the diversification indices. Unlike other studies found in the crop diversification literature, this study used econometric data reduction procedures to select the appropriate diversification index, and not only estimated the fractional regression model but selected the most appropriate functional form from the four modelled. The results show a low extent of vegetable diversification. The major determinants of vegetable diversification are cultivation of cocoa, marital status of household head and total land endowment.
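For concreteness, the diversification indices referred to here (the Herfindahl, Simpson, and Shannon entropy indices listed under the abbreviations) can be computed from a farm's per-crop area or revenue shares. The sketch below is our illustration of these standard formulas, not the authors' code:

```python
import math

# Illustrative computation (not the authors' code): common diversification
# indices from a farm's per-crop shares (shares are assumed to sum to 1).

def herfindahl(shares):
    """Herfindahl index: sum of squared shares (1 = fully specialised)."""
    return sum(s * s for s in shares)

def simpson(shares):
    """Simpson diversification index: 1 - Herfindahl (0 = no diversification)."""
    return 1.0 - herfindahl(shares)

def shannon_entropy(shares):
    """Shannon entropy index: higher values mean more diversification."""
    return -sum(s * math.log(s) for s in shares if s > 0)

# A farm splitting land equally over four vegetable crops:
equal = [0.25, 0.25, 0.25, 0.25]
print(round(simpson(equal), 2))          # → 0.75
print(round(shannon_entropy(equal), 3))  # → 1.386
```

A fully specialised farm (one crop) scores 0 on the Simpson index and 0 on the entropy index, which is why low index values in the study indicate a low extent of diversification.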
There is the need to intensify the integration of vegetables within cocoa-based systems among farmers in Ghana's cocoa belt. Households would thus have to diversify into high-value vegetable crops per unit of land to earn higher net year-round income, rather than traditionally depending solely on the seasonal income accruing from cocoa or only one vegetable. In addition, there is a higher chance of improving and ultimately ensuring household nutrition security through the availability and likely intake of the diverse vegetables required for a balanced diet. Cocoa farmers should therefore be encouraged to consider selecting some vegetable crops for cultivation in cocoa farms at the early stages of cocoa establishment, in addition to traditional cocoa-shade crops such as plantain and cassava. Vegetable farmers without cocoa plots may consider cultivating cocoa as well. Alternatively, vegetable farmers can consider arrangements that give them access to cocoa farms at the early stages of the cocoa crop; the vegetable farmer plants diversified vegetables during the period until the cocoa canopy disallows such activities, and cocoa farmers share in the vegetable proceeds accordingly. The most critical determinant with respect to land use is land access (user rights) rather than owning land per se.
This section draws from: http://www.ghanadistricts.com.
DGP:
data generation process
EDI:
entropy index
FRM:
fractional regression model
GGOFF:
generalised goodness-of-functional form
HDI:
Herfindahl index
LR:
log likelihood ratio test
OLS:
ordinary least squares
RESET:
regression equation specification error test
SDI:
Simpson diversification index
SEI:
Shannon entropy index
VDI:
vegetable diversification index
Ali J. Adoption of diversification for risk management in vegetable cultivation. Int J Veg Sci. 2015;21:9–20.
Ali M, Byerlee D. Productivity growth and resource degradation in Pakistan's Punjab: a decomposition analysis. Econ Dev Cult Change. 2002;50:839–64.
Armah M. Investment opportunity in Ghana: chilli pepper. Accra: Millennium Development Authority; 2010. https://www.mcc.gov/documents/investmentopps/bom-ghana-english-chili.pdf. Retrieved on 29 Mar 2014.
Beets WC. Multiple cropping and tropical farming systems. Aldershot: Gower; 1982.
Birthal PS, Joshi PK, Chauhan S, Singh H. Can horticulture revitalise the agricultural growth? Indian J Agric Econ. 2009;63:310–21.
Brummer B, Glauben TG, Lu W. Policy reform and productivity change in Chinese agriculture: a distance function approach. J Dev Econ. 2006;81:61–79.
Caviglia-Harris J, Sills E. Land use and income diversification: comparing traditional and colonist population in the Brazilian Amazon. Agric Econ. 2005;32:221–37.
Chaplin H. Agricultural diversification: a review of methodological approaches and empirical evidence. Idara working paper 2/2, Wye, UK. 2000.
Das I, Dutta MK, Borbora S. Status and growth trends in area production and productivity of horticulture crops in Assam. IUP J Agric Econ. 2007;4:7–24.
Davidson R, MacKinnon JG. Several tests for model specification in the presence of alternative hypotheses. Econom J Econom Soc. 1981;49:781–93.
Djokoto JG. Technical efficiency of organic agriculture: a quantitative review. Stud Agric Econ. 2015;117:67–71.
Djokoto JG, Gidiglo KF. Technical efficiency in agribusiness: a meta-analysis on Ghana. Agribusiness. 2016;32:397–415.
Djokoto JG, Afari-Sefa V, Addo-Quaye A. Vegetable supply chains in Ghana: production constraints, opportunities and policy implications for enhancing food and nutritional security. Int J Trop Agric. 2015;33(3):2113–121
Djokoto JG, Srofenyo FY, Arthur AAA. Technical inefficiency effects in agriculture—a meta-regression. J Agric Sci. 2016;8:109–21.
Francis CA. Multiple cropping systems. New York: Macmillan; 1986.
Ganry J. Current status of fruits and vegetables production and consumption in francophone African countries—potential impact on health. Acta Hortic. 2009;841:249–56.
Gulati A, Minot N, Delgado C, Bora S. Growth in high-value agriculture in Asia and the emergence of vertical links with farmers. In: Swinnen JFM, editor. Global supply chains, standards and the poor: how the globalization of food systems and standards affects rural development and poverty. Wallingford: CABI; 2007. p. 91–108.
Gunasena HMM. Intensification of crop diversification in the Asia-Pacific Region. In: Papedemetrion M, Dent F, editors. Proceeding of the paper presented at the Food and Agriculture Organisation FAO. Sponsored expert consultation on "Crop Diversification in the Asia-pacific Region", Held in Bangkok, Thailand, 4–6 July, FAO, Rome, 2001 publication. 2000.
Gyau A, Spiller A. Determinants of trust in the international fresh produce business between Ghana and Europe. Int Bus Manage. 2007;1(4):104–11
Harman HH. Modern factor analysis. Oxford: University of Chicago Press; 1960.
Johnston GW, Vaupel S, Kegel FR, Cadet M. Crop and farm diversification provide social benefits. Calif Agric. 1995;49:10–6.
Jones BA, Madden GJ, Wengreen HJ. The FIT Game: preliminary evaluation of a gamification approach to increasing fruit and vegetable consumption in school. Prev Med. 2014;68:76–9
Joshi PK, Gulati A, Birthal PS, Tewari L. Agriculture diversification in South Asia: patterns, determinants and policy implications. Econ Polit Wkly. 2004;39:2457–67.
Joshi PK, Joshi L, Birthal PS. Diversification and its impact on smallholders: evidence from a study on vegetable production. Agric Econ Res Rev. 2006;19:219–36.
Kass DC. Polyculture cropping systems: review and analysis. Cornell Int Agric Bull USA 1978;32:1–69.
Malton PJ, Fafchmaps M. Crop budgets in three agro-ecological zones of West Africa. Economics Group Project Report, Patancheru India: ICRISAT, Hyderabad, India. 1988.
Markowitz HM. Foundations of portfolio theory. J financ. 1991;46(2):469–77
Norman MJT. Annual cropping systems in the tropics. Gainesville: University Presses of Florida; 1979.
Ntow WJ, Gijzen HJ, Kelderman P, Drechsel P. Farmer perceptions and pesticide use practices in vegetable production in Ghana. Pest Manag Sci. 2006;62:356–65.
Ogbanje CE, Nweze NJ. Off-farm diversification among small-scale farmers in North Central Nigeria. J Econ Sustain Dev. 2014;5:136–44.
Ogundari K. Crop diversification and technical efficiency in food crop production. Int J Soc Econ. 2013;40:267–87.
Ogundari K. The paradigm of agricultural efficiency and its implication on food security in Africa: what does meta-analysis reveal? World Dev. 2014;64:690–702.
Owusu-Boateng G, Amuzu KK. A survey of some critical issues in vegetable crops farming along River Oyansia in Opeibea and Dzorwulu, Accra-Ghana. Glob Adv Res J Phys Appl Sci. 2013;2:24–31.
Papke LE, Wooldridge JM. Econometric methods for fractional response variables with an application to 401(k) plan participation rates. J Appl Econom. 1996;11:619–32.
Paul CJM, Nehring R. Product diversification, production systems, and economic performance in US agricultural production. J Econom. 2005;126:525–48.
Ramalho EA, Ramalho JJ, Henriques PD. Fractional regression models for second stage DEA efficiency analyses. J Prod Anal. 2010;34:239–55.
Ramalho JJ. FRM: STATA module to estimate and test fractional regression models. Statistical Software Components. 2014. http://ideas.repec.org/c/boc/bocode/s457542.html. Accessed 15 May 2015.
Ramsey JB. Tests for specification errors in classical linear least-squares regression analysis. J R Stat Soc Ser B (Methodological) 1969;31(2):350–71.
Ryan JG, Spencer DC. Future challenges and opportunities for agricultural R&D in the semi-arid tropics. Hyderabad: International Crops Research Institute for the Semi-Arid Tropics ICRISAT; 2001.
Sanderson MA, Archer D, Hendrickson J, Kronberg S, Liebig M, Nichols K, Schmer M, Tanaka D, Aguilar J. Diversification and ecosystem services for conservation agriculture: outcomes from pastures and integrated crop–livestock systems. Renew Agric Food Syst. 2013;28:129–44.
Shaxon L, Tauer LW. Intercropping and diversify: an economic analysis of cropping patterns on smallholder farms in Malawi. Exp Agric. 1992;28:211–28.
Souza GDS, Gomes EG. Fractional regression models for assessing the significance of contextual variables in output oriented DEA models. In: Paper presented at Congreso Latino-Iberoamericano de Investigacion Opetativa; Simpossio Brasileiron de Pesquisa Operacional. September 24–28, Rio de Janeiro, Brazil. 2012.
Swinnen JFM, Maertens M. Globalisation, privatisation, and vertical coordination in food value chains in developing and transition countries. Agric Econ. 2007;37:89–102.
Weinberger K, Lumpkin T. Horticulture, poverty reduction and a research agenda. World Dev. 2007;35:1464–80.
JGD helped in data collection, data analysis and drafting paper. VA-S was involved in the review and beefing of the paper. AA-Q helped in data collection and review. All authors read and approved the final manuscript.
We would like to acknowledge Humidtropics and the CGIAR Fund Donors for their provision of core and project-specific funding through the World Vegetable Center and other partners without which this research could not deliver results that eventually positively impact the lives of millions of smallholder farmers in tropical Americas, Asia and Africa.
Data available on request.
Department of Agribusiness Management, Central Business School, Central University College, P. O. Box DS 2310, Dansoman, Ghana
Justice G. Djokoto
World Vegetable Center, West and Central Africa, Samanko Research Station, BP 320, Bamako, Mali
Victor Afari-Sefa
Department of Agriculture, College of Agriculture and Life Sciences, Anglican University College of Technology, P. O. Box 78, Nkoranza, Ghana
Albert Addo-Quaye
Correspondence to Justice G. Djokoto.
Djokoto, J.G., Afari-Sefa, V. & Addo-Quaye, A. Vegetable diversification in cocoa-based farming systems of Ghana. Agric & Food Secur 6, 6 (2017) doi:10.1186/s40066-016-0082-4
Received: 25 May 2016
Keywords: Vegetable diversification; Cocoa; Fractional regression
A deformation theorem for the Kobayashi metric
by M. Kalka
Proc. Amer. Math. Soc. 59 (1976), 245-251
Let ${M_0}$ be a compact hyperbolic complex manifold. It is shown that the infinitesimal Kobayashi metric is upper semicontinuous in a ${C^\infty }$ deformation parameter $t \in U \subseteq {R^k}$. This is accomplished by proving deformation theorems for holomorphic maps.
R. Brody, Thesis, Harvard Univ., Cambridge, Mass., June, 1975.
Earl A. Coddington and Norman Levinson, Theory of ordinary differential equations, McGraw-Hill Book Co., Inc., New York-Toronto-London, 1955. MR 0069338
G. B. Folland and J. J. Kohn, The Neumann problem for the Cauchy-Riemann complex, Annals of Mathematics Studies, No. 75, Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1972. MR 0461588
Phillip A. Griffiths, Differential geometry and complex analysis, Differential geometry (Proc. Sympos. Pure Math., Vol. XXVII, Part 2, Stanford Univ., Stanford, Calif., 1973) Amer. Math. Soc., Providence, R.I., 1975, pp. 43–64. MR 0399521
Robert C. Gunning and Hugo Rossi, Analytic functions of several complex variables, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1965. MR 0180696
H. L. Royden, Remarks on the Kobayashi metric, Several complex variables, II (Proc. Internat. Conf., Univ. Maryland, College Park, Md., 1970) Lecture Notes in Math., Vol. 185, Springer, Berlin, 1971, pp. 125–137. MR 0304694
H. L. Royden, The extension of regular holomorphic maps, Proc. Amer. Math. Soc. 43 (1974), 306–310. MR 335851, DOI 10.1090/S0002-9939-1974-0335851-X
Chester Seabury, Some extension theorems for regular maps of Stein manifolds, Bull. Amer. Math. Soc. 80 (1974), 1223–1224. MR 350072, DOI 10.1090/S0002-9904-1974-13688-8
M. Wright, The Kobayashi pseudo-metric on algebraic manifolds of general type and in deformations of complex manifolds, Trans. Amer. Math. Soc. (to appear)
Retrieve articles in Proceedings of the American Mathematical Society with MSC: 32H15, 32G05
Retrieve articles in all journals with MSC: 32H15, 32G05
Journal: Proc. Amer. Math. Soc. 59 (1976), 245-251
MSC: Primary 32H15; Secondary 32G05
DOI: https://doi.org/10.1090/S0002-9939-1976-0412481-4 | CommonCrawl |
2016, 10: 483-495. doi: 10.3934/jmd.2016.10.483
The automorphism group of a minimal shift of stretched exponential growth
Van Cyr 1, and Bryna Kra 2,
Department of Mathematics, Bucknell University, 1 Dent Drive, Lewisburg, PA 17837, United States
Department of Mathematics, Northwestern University, 2033 Sheridan Road, Evanston, IL 60208, United States
Received September 2015 Revised August 2016 Published October 2016
The group of automorphisms of a symbolic dynamical system is countable, but often very large. For example, for a mixing subshift of finite type, the automorphism group contains isomorphic copies of the free group on two generators and the direct sum of countably many copies of $\mathbb{Z}$. In contrast, the group of automorphisms of a symbolic system of zero entropy seems to be highly constrained. Our main result is that the automorphism group of any minimal subshift of stretched exponential growth with exponent $<1/2$, is amenable (as a countable discrete group). For shifts of polynomial growth, we further show that any finitely generated, torsion free subgroup of Aut(X) is virtually nilpotent.
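The growth conditions in the abstract are stated in terms of the block (factor) complexity function $p(n)$, the number of distinct length-$n$ blocks occurring in the subshift. As an illustration of this notion (our example, not taken from the paper), the sketch below computes $p(n)$ for the Fibonacci word, a Sturmian word whose complexity $p(n) = n + 1$ is the minimum possible for an aperiodic sequence:

```python
# Illustration of block complexity p(n): the number of distinct length-n
# blocks occurring in a sequence. The Fibonacci word is Sturmian, so its
# complexity is p(n) = n + 1. (Example ours, not from the paper.)

def fibonacci_word(length):
    """Prefix of the fixed point of the substitution 0 -> 01, 1 -> 0."""
    w = "0"
    while len(w) < length:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:length]

def block_complexity(word, n):
    """Number of distinct length-n factors occurring in `word`."""
    return len({word[i:i + n] for i in range(len(word) - n + 1)})

w = fibonacci_word(500)
print([block_complexity(w, n) for n in range(1, 7)])  # → [2, 3, 4, 5, 6, 7]
```

By contrast, an eventually periodic word has bounded complexity, and "stretched exponential growth with exponent $\beta$" means $p(n)$ grows roughly like $e^{n^{\beta}}$.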
Keywords: subshift, zero entropy, automorphism, amenable, block complexity.
Mathematics Subject Classification: Primary: 37B10; Secondary: 43A07, 54H20, 68R1.
Citation: Van Cyr, Bryna Kra. The automorphism group of a minimal shift of stretched exponential growth. Journal of Modern Dynamics, 2016, 10: 483-495. doi: 10.3934/jmd.2016.10.483
H. Bass, The degree of polynomial growth of finitely generated nilpotent groups, Proc. London Math. Soc. (3), 25 (1972), 603.
M. Boyle, D. Lind and D. Rudolph, The automorphism group of a shift of finite type, Trans. Amer. Math. Soc., 306 (1988), 71. doi: 10.1090/S0002-9947-1988-0927684-2.
E. Coven, A. Quas and R. Yassawi, Computing automorphism groups of shifts, using atypical equivalence classes, Discrete Anal., (2016), 1. doi: 10.19086/da.611.
V. Cyr and B. Kra, The automorphism group of a shift of subquadratic growth, Proc. Amer. Math. Soc., 144 (2016), 613. doi: 10.1090/proc12719.
V. Cyr and B. Kra, The automorphism group of a shift of linear growth: beyond transitivity, Forum Math. Sigma, 3 (2015). doi: 10.1017/fms.2015.3.
P. de la Harpe, Topics in Geometric Group Theory, Chicago Lectures in Mathematics, (2000).
S. Donoso, F. Durand, A. Maass and S. Petite, On automorphism groups of low complexity subshifts, Ergodic Theory Dynam. Systems, 36 (2016), 64. doi: 10.1017/etds.2015.70.
S. Donoso, F. Durand, A. Maass and S. Petite, Private communication.
M. Gromov, Groups of polynomial growth and expanding maps, Inst. Hautes Études Sci. Publ. Math., 53 (1981), 53.
Y. Guivarc'h, Groupes de Lie à croissance polynomiale, C. R. Acad. Sci. Paris Sér. A-B, 272 (1971).
G. A. Hedlund, Endomorphisms and automorphisms of the shift dynamical system, Math. Systems Theory, 3 (1969), 320. doi: 10.1007/BF01691062.
M. Morse and G. A. Hedlund, Symbolic dynamics II. Sturmian trajectories, Amer. J. Math., 62 (1940), 1. doi: 10.2307/2371431.
K. H. Kim and F. W. Roush, On the automorphism groups of subshifts, Pure Math. Appl. Ser. B, 1 (1990), 203.
V. Salo, Toeplitz subshift whose automorphism group is not finitely generated, Colloquium Mathematicum, (2016). doi: 10.4064/cm6463-2-2016.
V. Salo and I. Törmä, Block maps between primitive uniform and Pisot substitutions, Ergodic Theory and Dynam. Systems, 35 (2015), 2292. doi: 10.1017/etds.2014.29.
L. van den Dries and A. Wilkie, Gromov's theorem on groups of polynomial growth and elementary logic, J. Algebra, 89 (1984), 349. doi: 10.1016/0021-8693(84)90223-0.
Journal of Animal Science and Biotechnology
Multi-omics-data-assisted genomic feature markers preselection improves the accuracy of genomic prediction
Shaopan Ye1,
Jiaqi Li1 &
Zhe Zhang ORCID: orcid.org/0000-0001-7338-77181
Journal of Animal Science and Biotechnology volume 11, Article number: 109 (2020)
Presently, multi-omics data (e.g., genomics, transcriptomics, proteomics, and metabolomics) are available to improve genomic predictors. Omics data not only offers new data layers for genomic prediction but also provides a bridge between organismal phenotypes and genome variation that cannot be readily captured at the genome sequence level. Therefore, using multi-omics data to select feature markers is a feasible strategy to improve the accuracy of genomic prediction. In this study, simultaneously using whole-genome sequencing (WGS) and gene expression level data, four strategies for single-nucleotide polymorphism (SNP) preselection were investigated for genomic predictions in the Drosophila Genetic Reference Panel.
Using genomic best linear unbiased prediction (GBLUP) with complete WGS data, the prediction accuracies were 0.208 ± 0.020 (0.181 ± 0.022) for the startle response and 0.272 ± 0.017 (0.307 ± 0.015) for starvation resistance in the female (male) lines. Compared with GBLUP using complete WGS data, both GBLUP and the genomic feature BLUP (GFBLUP) did not improve the prediction accuracy using SNPs preselected from complete WGS data based on the results of genome-wide association studies (GWASs) or transcriptome-wide association studies (TWASs). Furthermore, by using SNPs preselected from the WGS data based on the results of the expression quantitative trait locus (eQTL) mapping of all genes, only the startle response had greater accuracy than GBLUP with the complete WGS data. The best accuracy values in the female and male lines were 0.243 ± 0.020 and 0.220 ± 0.022, respectively. Importantly, by using SNPs preselected based on the results of the eQTL mapping of significant genes from TWAS, both GBLUP and GFBLUP resulted in great accuracy and small bias of genomic prediction. Compared with the GBLUP using complete WGS data, the best accuracy values represented increases of 60.66% and 39.09% for the starvation resistance and 27.40% and 35.36% for startle response in the female and male lines, respectively.
Overall, multi-omics data can assist genomic feature preselection and improve the performance of genomic prediction. The new knowledge gained from this study will enrich the use of multi-omics in genomic prediction.
Genomic prediction, also known as genomic selection (GS), was initially proposed in 2001 [1] and is a statistical method to predict the yet-to-be observed phenotypes or unobserved genetic values of complex traits based on genomic data. This method assumes that all quantitative trait loci (QTLs) are in linkage disequilibrium (LD) with at least one marker in the whole genome. GS is known for shortening the generation intervals and increasing the reliability of predicted breeding values, especially for dairy cattle breeding [2]. Presently, genomic prediction is widely used in animal and plant breeding and polygenic disease risk prediction.
Over the past decade, the implementation of GS was mainly based on single-nucleotide polymorphism (SNP) chip data. With the cost of sequencing dropping rapidly, it became possible to perform genomic predictions with whole-genome sequencing (WGS) data. Compared with SNP chip data, WGS data are expected to improve the accuracy of genomic predictions by increasing the degree of LD between the SNPs and QTLs, even including causal mutations. Simulation studies confirmed the hypothesis that WGS data would improve the accuracy of genomic prediction in a single population [3] or multiple populations [4]. However, higher accuracy of genomic prediction was not achieved for Drosophila using real WGS data [5], and similar results were found for livestock using imputed WGS data [6,7,8]. Possibly, large numbers of markers are both non-causal and not in LD with the causal loci. Moreover, our previous study indicated that LD pruning of imputed WGS data could improve prediction accuracy [8]. Therefore, preselecting potential causal markers or QTLs from WGS data has great potential for improving the accuracy of genomic prediction [9]. To date, many variant preselection strategies have been used to improve the power of genomic prediction, based on the following methods: genome-wide association study (GWAS) [8, 10,11,12], Bayesian procedures [13], genome-wide signatures of selection [14], QTL regions in the Animal QTLdb [12], gene annotation [15, 16], and gene ontology categories [17, 18]. These methods mainly depend on a direct link between phenotype and DNA variants or on prior genome annotation information. However, the genetic links between phenotype and genome variants are too complex to determine directly at the genome sequencing level.
Presently, it has become possible to obtain multi-omics data (e.g., genomic, transcriptomics, proteomics, and metabolomics) for genomic predictions. This makes it possible to uncover genotype–phenotype relationships using different types of data. Related studies were reported using omics data to perform genomic prediction for complex traits in humans [19, 20], plants [21,22,23,24], and model animals [25, 26]. Most of these studies focused on integrating multiple omics data into a prediction model to improve prediction accuracy [22, 25,26,27]. However, multi-omics data not only offers new data layers for genomic prediction but also provides a bridge between organismal phenotype and genome variation that cannot be readily captured at the genome sequence level [21]. Therefore, using omics data to select feature markers is a feasible strategy to improve the accuracy of genomic prediction.
In this study, using WGS and gene expression level data, different strategies of SNP preselection were investigated for genomic predictions in the Drosophila genetic reference panel (DGRP). Our results provide useful knowledge about preselected genomic features based on multi-omics data and thus improve the predictive ability of genomic predictions for complex traits.
The genomic, transcriptomic, and phenotypic data of DGRP lines
The DGRP is a living library of common polymorphisms affecting complex traits, as well as a community resource for whole-genome association mapping of quantitative trait loci [28, 29]. The DGRP has 205 Drosophila inbred lines derived from 20 generations of full-sib mating of isofemale lines collected at the Farmer's Market in Raleigh, NC, USA. These 205 lines were subjected to whole-genome sequencing using Illumina and 454 sequencing. After variant calling, a total of 4,672,297 SNPs were found across the chromosome arms (X, 2L, 2R, 3L, 3R, 4) [28]. The gene expression levels of 200 DGRP lines (as log2-transformed fragments per kilobase of transcript per million fragments mapped, FPKM) for 15,732 genes in females and 20,375 genes in males were obtained by Everett et al. [30] and can be found in GEO (accession GSE117850). Furthermore, two traits (startle response and starvation resistance) were selected as model traits. Finally, 198 and 199 lines were used for further genomic prediction of starvation resistance and startle response, respectively, because both phenotypes and expression levels were measured in these lines. In addition, the phenotypic values of startle response and starvation resistance per line were the averages of two replicate measurements (20 flies/sex/replicate) and five replicate measurements (10 flies/sex/replicate), respectively [28]. Quality control of the WGS data was conducted using PLINK [31] with the criteria of SNP call rate ≥ 95%, individual call rate ≥ 97%, MAF ≥ 5%, and Hardy–Weinberg equilibrium P-value ≥ 1.0e-6. The missing genotypes were imputed by Beagle 4.1 with default parameters [32]. Ultimately, a total of 2,037,712 SNPs was used for further analysis.
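As a toy illustration of the marker-level filters described above (the study itself ran PLINK and Beagle, not this code), the SNP call-rate and MAF criteria can be applied to a lines-by-SNPs matrix of allele counts coded 0/1/2 with `None` for missing genotypes:

```python
# Illustrative sketch, not the pipeline actually used: apply the SNP
# call-rate (>= 95%) and MAF (>= 5%) filters column by column.

def snp_passes(column, min_call_rate=0.95, min_maf=0.05):
    """True if a SNP column of {0, 1, 2, None} passes both filters."""
    called = [g for g in column if g is not None]
    call_rate = len(called) / len(column)
    if not called or call_rate < min_call_rate:
        return False
    p = sum(called) / (2.0 * len(called))  # allele frequency of one allele
    maf = min(p, 1.0 - p)                  # fold to the minor allele
    return maf >= min_maf

# 4 lines x 3 SNPs: SNP 2 is monomorphic (MAF = 0), SNP 3 is 25% missing.
geno = [[0, 0, 2],
        [1, 0, None],
        [2, 0, 1],
        [1, 0, 0]]
columns = list(zip(*geno))                 # transpose to SNP-major order
keep = [snp_passes(col) for col in columns]
print(keep)  # → [True, False, False]
```

In the real pipeline these filters were applied by PLINK before Beagle imputed the remaining missing genotypes.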
Genetic parameter estimations
Before performing genomic prediction, in order to assess how much phenotypic variability could be explained by the genetic variation in the WGS data, the variance components (additive genetic and residual variance) of the startle response and starvation resistance were estimated in the male and female lines, respectively, by the restricted maximum likelihood (REML) method implemented in the LDAK software [33]. The statistical model was
$$ \mathbf{y}=\boldsymbol{X}\mathbf{b}+\boldsymbol{Z}\mathbf{g}+\mathbf{e}, $$
where y is a vector of the phenotypic values of all lines; b is the Wolbachia infection status as a fixed effect; X and Z are the incidence matrices relating the fixed and polygenic effects to the phenotypic records; g is a vector of the polygenic effects of all individuals, assumed to be distributed as \( \mathbf{g}\sim N\left(\mathbf{0},\sigma_{g}^{2}\mathbf{G}\right) \); and e is the residual term, assumed to follow a normal distribution \( \mathbf{e}\sim N\left(\mathbf{0},\sigma_{e}^{2}\mathbf{I}\right) \). In addition, G is the standardized relatedness matrix calculated by the GEMMA v0.98.1 software [34] using all SNPs according to [35]:
$$ \mathbf{G}=\frac{\boldsymbol{M}{\boldsymbol{M}}^T}{2{\sum}_{i=1}^m{p}_i\left(1-{p}_i\right)}, $$
where M is the matrix of centered genotypes, and \( p_i \) is the minor allele frequency of SNP \( i \).
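The study computed G with GEMMA; the formula above is small enough to sketch directly. This is a minimal NumPy illustration (assuming a complete 0/1/2 genotype matrix with no missing values), not the GEMMA implementation:

```python
import numpy as np

def grm(geno):
    """Standardized relatedness matrix following the formula above ([35]).

    geno: (n, m) array of allele counts {0, 1, 2} with no missing values.
    """
    p = geno.mean(axis=0) / 2.0            # allele frequency p_i of each SNP
    M = geno - 2.0 * p                     # column-centred genotype matrix M
    denom = 2.0 * np.sum(p * (1.0 - p))    # 2 * sum_i p_i (1 - p_i)
    return M @ M.T / denom
```

The resulting matrix is symmetric, with off-diagonal entries measuring realized genomic relationships between lines.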
Strategies for selecting the feature markers in genomic prediction
In order to improve the predictive ability of whole genome prediction, four strategies were used to preselect SNPs from the WGS data as genomic feature markers: 1) SNP preselection based on GWAS results (abbreviated "S_GWAS"); 2) SNP preselection based on the genomic positions of significant genes from a transcriptome-wide association study (TWAS) (abbreviated "S_TWAS"); 3) SNP preselection based on the results of eQTL mapping of all genes (abbreviated "S_eQTL_A"); and 4) SNP preselection based on the results of eQTL mapping of significant genes from the TWAS (abbreviated "S_eQTL_S"). In all scenarios, if no gene or SNP remained after applying the cut-off threshold of a given category, the top two genes or top five SNPs were extracted as feature markers instead.
SNPs preselection based on the GWAS results (S_GWAS)
In order to link genomic variation with complex traits, GWASs were performed for each sex separately for the analyzed traits in the training population. Univariate tests of association were performed using a mixed model approach implemented in the GEMMA v0.98.1 software [34]. The model was
$$ \mathbf{y}=\mathbf{Xb}+\mathbf{Zg}+\mathbf{Sa}+\mathbf{e}, $$
where y is a vector of the phenotypic values of lines in the training set; a is the additive effect of the candidate variant to be tested for association; S is the genotype vector of that SNP; and the other terms are defined as above. A Wald test was applied to test the alternative hypothesis for each SNP in the univariate models. After the GWAS analysis, the SNPs associated with the traits were divided into different categories based on P-values of less than 0.05, 0.001, 0.0001, 0.00001, or 0.000001. Each category of significant SNPs was then extracted from the WGS data as a set of genomic features.
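The binning of SNPs into nested feature sets, together with the fallback rule stated above (top five SNPs when nothing survives a cutoff), can be sketched as follows. The function name and dict layout are illustrative assumptions:

```python
def preselect_snps(pvals, cutoffs=(0.05, 1e-3, 1e-4, 1e-5, 1e-6), min_snps=5):
    """S_GWAS strategy: bin SNPs into feature sets by GWAS P-value cutoff.

    pvals: dict mapping SNP id -> P-value from the training-set GWAS.
    If no SNP survives a cutoff, fall back to the top `min_snps` SNPs,
    mirroring the fallback rule stated for all preselection scenarios.
    """
    ranked = sorted(pvals, key=pvals.get)          # most significant first
    feature_sets = {}
    for c in cutoffs:
        kept = [s for s in ranked if pvals[s] < c]
        feature_sets[c] = kept if kept else ranked[:min_snps]
    return feature_sets
```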
SNPs preselection based on the genome position of significant genes from TWAS (S_TWAS)
In order to link the gene expression level with complex traits, TWASs were performed for each sex separately for the analyzed traits in the training population. The univariate tests of association were performed using a mixed model approach implemented in 'rMVP', a package in R (https://github.com/xiaolei-lab/rMVP). The model was
$$ \mathbf{y}=\mathbf{Xb}+\boldsymbol{Z}\mathbf{g}+\boldsymbol{T}\mathbf{u}+\mathbf{e}, $$
where y is a vector of the phenotypic values of lines in the training set; T is a vector of the expression levels of the candidate gene in the training set; u is the genetic effect of that gene to be tested for association; and the other terms are defined as above. A Wald test was applied to test the alternative hypothesis for each gene in the univariate models. After the TWAS analysis, the genes whose expression levels were significantly associated with the traits were divided into different categories based on P-values of less than 0.05, 0.001, 0.0001, 0.00001, or 0.000001. The SNPs located within the significant genes were then extracted as feature markers from the WGS data based on their genomic positions.
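The positional extraction step, mapping TWAS-significant genes back to the SNPs lying within their coordinates, can be sketched as below. The gene-annotation layout (arm, start, end) is a hypothetical simplification of whatever annotation the study used:

```python
def snps_in_significant_genes(snp_pos, gene_bounds, twas_p, cutoff=0.05):
    """S_TWAS strategy: keep SNPs located inside TWAS-significant genes.

    snp_pos:     dict snp_id -> (chromosome_arm, position)
    gene_bounds: dict gene_id -> (chromosome_arm, start, end)  # assumed format
    twas_p:      dict gene_id -> TWAS P-value in the training set
    """
    sig = [g for g, p in twas_p.items() if p < cutoff]
    return sorted(
        snp for snp, (chrom, pos) in snp_pos.items()
        if any(gene_bounds[g][0] == chrom
               and gene_bounds[g][1] <= pos <= gene_bounds[g][2]
               for g in sig)
    )
```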
SNPs preselection based on the results of the eQTL mapping of all genes (S_eQTL_A)
In order to link genome variation with the gene expression level, eQTL mapping was performed for each sex separately for each gene expression level using the WGS data. Univariate tests of association were performed using a mixed model approach implemented in the GEMMA v0.98.1 software [34]. The model was
$$ \mathbf{y}=\mathbf{Xb}+\mathbf{Zg}+\mathbf{Sa}+\mathbf{e}, $$
where y is a vector of the expression levels of a given gene across all lines; b is the vector of fixed effects, including Wolbachia infection status and five major polymorphic inversions [In2L(t), In2R(NS), In3R(P), In3R(K), and In3R(Mo)]; S is the genotype vector of the SNP being tested; and the other terms are defined as above. A Wald test was applied to test the alternative hypothesis for each SNP in the univariate models. After eQTL mapping, the significant eQTLs of each gene were divided into different categories based on P-values of less than 0.05, 0.001, 0.0001, 0.00001, or 0.000001. Each category of significant eQTLs was then extracted from the WGS data as feature markers. Because the polymorphic inversions have a direct impact on gene expression, and to avoid spurious associations from adjusting for them in both the eQTL mapping and the TWAS, the five major polymorphic inversions were included as fixed effects only in the eQTL mapping.
SNPs preselection based on the results of the eQTL mapping of significant genes (S_eQTL_S)
After the TWAS and eQTL mapping analyses, genes and eQTLs were divided into categories according to the significance thresholds described above. For each combination of gene and eQTL categories, the significant eQTLs of the significant genes were extracted from the WGS data as feature markers.
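The two-way grid of cutoff combinations described above can be sketched as follows; dict names and shapes are illustrative assumptions:

```python
def eqtl_s_feature_sets(twas_p, eqtl_p, gene_cutoffs, eqtl_cutoffs):
    """S_eQTL_S strategy: for every combination of gene and eQTL cutoffs,
    keep the significant eQTLs of the TWAS-significant genes.

    twas_p: dict gene -> TWAS P-value (training set)
    eqtl_p: dict gene -> {snp: eQTL-mapping P-value}
    Returns {(gene_cutoff, eqtl_cutoff): sorted list of feature SNPs}.
    """
    grid = {}
    for gc in gene_cutoffs:
        sig_genes = [g for g, p in twas_p.items() if p < gc]
        for ec in eqtl_cutoffs:
            snps = {s for g in sig_genes
                    for s, p in eqtl_p.get(g, {}).items() if p < ec}
            grid[(gc, ec)] = sorted(snps)
    return grid
```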
Genomic prediction model
The breeding values of the genotyped individuals were estimated via genomic best linear unbiased prediction (GBLUP) [35] and a genomic feature BLUP model (GFBLUP) [36]. The statistical model for the GBLUP approaches is
$$ \mathbf{y}=\boldsymbol{Xb}+\boldsymbol{Zg}+\boldsymbol{e}, $$
where y is a vector of the phenotypic values; b is the Wolbachia infection status as a fixed effect; and the other parameters are defined as above.
The GFBLUP model is an extension of GBLUP that includes two random genetic effects:
$$ \mathbf{y}=\boldsymbol{Xb}+{\boldsymbol{Z}}_{\mathbf{1}}\boldsymbol{f}+{\boldsymbol{Z}}_{\mathbf{2}}\boldsymbol{r}+\boldsymbol{e}, $$
where y, b, X, and e are the same as in GBLUP; f is the vector of genomic values captured by the genetic markers linked to the genomic feature of interest, following a normal distribution \( \boldsymbol{f}\sim N\left(\mathbf{0},\sigma_{f}^{2}{\boldsymbol{G}}_{f}\right) \); and r is a vector of genomic values captured by the remaining set of genetic markers, following a normal distribution \( \boldsymbol{r}\sim N\left(\mathbf{0},\sigma_{r}^{2}{\boldsymbol{G}}_{r}\right) \). Z1 and Z2 are the incidence matrices relating the genomic values (f and r) to the phenotypic records. Gf and Gr were constructed according to [35] using the preselected and remaining markers, respectively.
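The GFBLUP predictions can be illustrated with a minimal sketch, assuming known variance components, one record per line, and an intercept-only fixed effect (so Z1 = Z2 = I). The study itself estimated the components with LDAK and solved the mixed model equations; this is just the equivalent generalized-least-squares form:

```python
import numpy as np

def gfblup_predict(y, Gf, Gr, var_f, var_r, var_e):
    """Minimal GFBLUP sketch with known variance components.

    Assumes one record per line and an intercept-only fixed effect, so that
    Z1 = Z2 = I. Returns the total genomic value f_hat + r_hat for all lines.
    """
    n = len(y)
    X = np.ones((n, 1))
    V = var_f * Gf + var_r * Gr + var_e * np.eye(n)      # phenotypic covariance
    Vinv = np.linalg.inv(V)
    b = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # GLS fixed effects
    resid = y - X @ b
    f_hat = var_f * Gf @ Vinv @ resid                    # BLUP of the feature term
    r_hat = var_r * Gr @ Vinv @ resid                    # BLUP of the remainder
    return f_hat + r_hat
```

Setting var_f = 0 (or Gf = Gr) collapses the model back to ordinary GBLUP, which is how the extra weight on the genomic feature enters.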
In this study, the variance components were estimated in the training set using the REML algorithm in the LDAK software [33]. Finally, using the dispersion matrices as defined in [37] together with the estimated variance components, predictions of the genetic values of the testing sets were obtained by solving the mixed model equations.
Predictive ability evaluation
Pearson's correlation and the regression coefficient between the predicted genetic values and the "true" phenotypic values were used to assess the accuracy and bias of genomic prediction, where the "true" phenotypic values are the original observations corrected for fixed effects. Ten replicates of five-fold cross-validation were used to reduce the uncertainty of the predictive correlations. Briefly, the genotyped individuals were randomly divided into five subsets; one subset was used as the validation set and the remaining four as the reference set, and the process was repeated five times so that each subset was validated once. Finally, the accuracy and bias of genomic prediction averaged over the ten replicates of five-fold cross-validation are reported.
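The evaluation loop above can be sketched as follows; the `predict_fn` callback and function name are illustrative assumptions standing in for whichever model (GBLUP or GFBLUP) is being evaluated:

```python
import numpy as np

def cross_validate(y, predict_fn, n_folds=5, n_repeats=10, seed=0):
    """Repeated k-fold cross-validation as described above (illustrative sketch).

    predict_fn(train_idx, test_idx) -> predicted genetic values for test_idx;
    accuracy = Pearson r(prediction, phenotype), bias = regression slope of
    the phenotype on the prediction.
    """
    rng = np.random.default_rng(seed)
    idx = np.arange(len(y))
    accs, slopes = [], []
    for _ in range(n_repeats):
        rng.shuffle(idx)
        for fold in np.array_split(idx.copy(), n_folds):
            train = np.setdiff1d(idx, fold)
            pred = predict_fn(train, fold)
            accs.append(np.corrcoef(pred, y[fold])[0, 1])
            slopes.append(np.cov(pred, y[fold])[0, 1] / np.var(pred, ddof=1))
    return float(np.mean(accs)), float(np.mean(slopes))
```

A regression slope near 1 indicates unbiased predictions; slopes below 1 indicate inflated (over-dispersed) predictions.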
Summary statistics and genetic parameter estimations of the analyzed traits
Before performing genomic prediction, summary statistics and genetic parameters of the analyzed traits were computed in the male and female lines; the detailed results are shown in Table 1. The startle response times in the female lines (average 28.68 s; range: 14.13–41.25) were similar to those in the male lines (average 28.25 s; range: 13.38–42.10). However, starvation resistance times in the female lines (average 60.43 h; range: 34.45–106.56) were much longer than those in the male lines (average 45.52 h; range: 21.28–72.00). The standard deviations were 6.37 and 6.45 for the startle response and 12.61 and 9.40 for starvation resistance in the female and male lines, respectively. The coefficients of variation were 22.21% and 22.83% for the startle response and 20.87% and 20.65% for starvation resistance in the male and female lines, respectively, indicating substantial phenotypic variation in these traits. Furthermore, the heritability estimates (standard errors) were 0.771 (0.191) and 0.691 (0.222) for the startle response and 0.999 (0.083) and 0.999 (0.071) for starvation resistance in the male and female lines, respectively, indicating that both are high-heritability traits. Based on likelihood ratio tests, the significance levels of the heritability estimates were 0.003 and 0.011 for the startle response and 0.0002 and 0.00002 for starvation resistance, indicating a significant genetic contribution to the phenotypic variability.
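The two derived quantities in Table 1 follow directly from the variance components and the trait means; a minimal sketch of the formulas (illustrative, on the line-mean scale used here):

```python
def heritability_and_cv(var_g, var_e, trait_mean, trait_sd):
    """Illustrative formulas behind Table 1:
    h2 = var_g / (var_g + var_e) and CV (%) = 100 * sd / mean."""
    h2 = var_g / (var_g + var_e)
    cv_percent = 100.0 * trait_sd / trait_mean
    return h2, cv_percent
```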
Table 1 Summary statistics and genetic parameter estimations of the analyzed traits
SNPs preselection based on the GWAS results (S_GWAS) with different P-value cutoffs for genomic prediction
The accuracies of GBLUP and GFBLUP using S_GWAS with different P-value cutoffs are shown in Table 2. When GBLUP was performed using the complete WGS data, the prediction accuracies were 0.208 ± 0.020 and 0.181 ± 0.022 for the startle response and 0.272 ± 0.017 and 0.307 ± 0.015 for starvation resistance in the female and male lines, respectively (Table 2). Using S_GWAS with the optimal P-value cutoff (P < 0.05), the accuracies of GBLUP were 0.186 ± 0.021 and 0.158 ± 0.022 for the startle response and 0.207 ± 0.020 and 0.268 ± 0.020 for starvation resistance in the female and male lines, respectively (Table 2). These accuracies, however, were still lower than those of GBLUP with the complete WGS data. Furthermore, when using S_GWAS for genomic prediction, the accuracy of GBLUP increased with the P-value cutoff (Table 2), i.e., with the number of SNPs (Table S2); for example, as the number of SNPs increased from 11 to 100,708, the accuracy of GBLUP increased from 0.066 to 0.186 for the startle response in the female lines. Using S_GWAS with the optimal P-value cutoff, the accuracy of GFBLUP was much lower than that of GBLUP (Table 2), and there was no obvious trend in the accuracy of GFBLUP across P-value cutoffs. Overall, S_GWAS yielded lower accuracy and larger bias of genomic prediction than the complete WGS data for both GBLUP and GFBLUP (Table 2, Table S1).
Table 2 Prediction accuracies using SNPs preselection based on GWAS results (S_GWAS)
SNPs preselection based on the TWAS results (S_TWAS) with different P-value cutoffs for genomic prediction
The accuracies of GBLUP and GFBLUP using S_TWAS with different P-value cutoffs are shown in Table 3. With the optimal P-value cutoff (P < 0.05), the accuracy of GBLUP using S_TWAS was 0.189 ± 0.022 and 0.118 ± 0.022 for the startle response and 0.128 ± 0.017 and 0.196 ± 0.015 for starvation resistance in the female and male lines, respectively (Table 3). However, compared with the complete WGS data, the accuracy of GBLUP could not be improved by using S_TWAS. In addition, when using S_TWAS for genomic prediction, the accuracy of GBLUP always increased with the P-value cutoff, i.e., with the number of SNPs (Table 3 and Table S4); for example, as the number of SNPs increased from 594 to 70,285, the accuracy of GBLUP increased from −0.001 to 0.189 for the startle response in the female lines. Compared with GBLUP with S_TWAS, GFBLUP yielded higher accuracy and smaller bias, except for the startle response using a P-value cutoff of less than 0.05 (Table 3 and Table S3), but these accuracies still did not exceed that of GBLUP using the complete WGS data. However, when P-value cutoffs of less than 0.0001 were used to preselect the SNPs, the accuracy of GFBLUP equaled that of GBLUP with the complete WGS data (Table 3), with a smaller bias (Table S3).
Table 3 Prediction accuracies using SNPs preselection based on TWAS results (S_TWAS)
SNPs preselection based on the eQTL mapping results of all genes (S_eQTL_A) with different P-value cutoffs for genomic prediction
The accuracies of GBLUP and GFBLUP using S_eQTL_A with different P-value cutoffs are shown in Table 4. With the optimal P-value cutoffs, the accuracies of GBLUP using S_eQTL_A were 0.243 ± 0.020 and 0.220 ± 0.022 for the startle response and 0.274 ± 0.017 and 0.305 ± 0.015 for starvation resistance in the female and male lines, respectively (Table 4). Compared with GBLUP with S_eQTL_A, GFBLUP resulted in lower prediction accuracy, except for the startle response using P-value cutoffs of less than 0.001 in the male lines (Table 4). Furthermore, with S_eQTL_A, the trends in accuracy and bias differed between the startle response and starvation resistance. For the startle response, the best accuracies with the optimal S_eQTL_A strategy represented increases of 19.12% and 21.55% for GBLUP and 10.78% and 19.89% for GFBLUP in the female and male lines, respectively, compared to GBLUP with the complete WGS data (Table 4); however, the biases of genomic prediction with the optimal preselected SNPs were larger than those with the complete WGS data (Table S5). For starvation resistance, lower accuracy and similar biases were found in the female and male lines, respectively (Table 4 and Table S5). In addition, once the number of SNPs was sufficiently large, further increasing the number of SNPs decreased the accuracy of GBLUP (Table 4 and Table S6); for example, as the number of SNPs increased from 1,038,728 to 2,023,905, the accuracy of GBLUP decreased from 0.241 to 0.220 for the startle response in the female lines.
Table 4 Prediction accuracies using SNPs preselection based on the results of eQTL mapping of all genes (S_eQTL_A)
SNPs preselection based on the eQTL mapping results of significant genes (S_eQTL_S) with different P-value cutoffs for genomic prediction
The accuracy of genomic prediction for the startle response and starvation resistance using S_eQTL_S with different P-value cutoffs is shown in Figs. 1 and 2, respectively. For the startle response, when P-value cutoffs of less than 0.05 or 0.001 were used to select the significant genes, there was an appropriate P-value cutoff for preselecting eQTLs that improved the prediction accuracy of GBLUP and GFBLUP over GBLUP with the complete WGS data, except for GFBLUP in the female lines (Fig. 1). The best accuracies were 0.258 ± 0.019 and 0.237 ± 0.019 for GBLUP and 0.265 ± 0.018 and 0.245 ± 0.020 for GFBLUP in the female and male lines, respectively (Fig. 1). Compared with GBLUP using the complete WGS data, these accuracies represented increases of 24.04% and 30.94% for GBLUP and 27.40% and 35.36% for GFBLUP in the female and male lines, respectively (Fig. 1). Furthermore, using the SNPs preselected with the optimal strategy, the bias of GBLUP was 0.916 ± 0.080 and 0.851 ± 0.079, similar to that of GBLUP with the complete WGS data in the female (1.113 ± 0.140) and male lines (1.223 ± 0.177), whereas larger biases of GFBLUP were found in the female (0.415 ± 0.099) and male (0.324 ± 0.096) lines (Table S7). However, when P-value cutoffs of less than 0.0001 or 0.00001 were used to select the significant genes, the accuracy was lower than with the complete WGS data for both GBLUP and GFBLUP, regardless of the P-value cutoff used to preselect eQTLs. For starvation resistance, regardless of the P-value cutoff used to preselect significant genes from the TWAS results, there was always an appropriate P-value cutoff for preselecting eQTLs that improved the accuracy of GBLUP and GFBLUP over GBLUP with the complete WGS data (Fig. 2). The best accuracies were 0.437 ± 0.015 and 0.427 ± 0.015 for GBLUP and 0.419 ± 0.016 and 0.390 ± 0.014 for GFBLUP (Fig. 2).
Compared to GBLUP with the complete WGS data, these accuracies represented increases of 60.66% and 39.09% for GBLUP and 54.04% and 27.04% for GFBLUP in the female and male lines, respectively (Fig. 2). Furthermore, using the SNPs preselected with the optimal strategy, the biases of genomic prediction were 0.897 ± 0.064 and 1.217 ± 0.061 for GBLUP and 1.122 ± 0.060 and 1.106 ± 0.062 for GFBLUP in the female and male lines, respectively; these values were similar to or smaller than the biases of GBLUP with the complete WGS data (1.137 ± 0.078 and 1.153 ± 0.065 in the female and male lines, respectively) (Table S6). In addition, the numbers of SNPs preselected from the WGS data based on the eQTL mapping of significant genes from the TWAS are shown in Table S8.
Fig. 1 Prediction accuracies of the startle response using the S_eQTL_S strategy with different P-value cutoffs. S_eQTL_S represents SNPs preselected from WGS data based on the results of the eQTL mapping of significant genes. The Y axis represents the Pearson correlation between the predicted genetic values and the phenotypic values for each trait in the validation sets. Both the X axis and the different colors of box plots represent the SNP datasets preselected from whole genome sequencing data using different P-value cutoffs based on the results of the eQTL mapping of significant genes from a transcriptome-wide association study (TWAS). GBLUP-Female and GBLUP-Male refer to performing genomic best linear unbiased prediction (GBLUP) on the female and male lines. GFBLUP-Female and GFBLUP-Male refer to performing genomic feature best linear unbiased prediction (GFBLUP) on the female and male lines. TWAS (P < cutoffs) refers to using the P-value cutoffs to preselect significant genes from TWAS. Black lines indicate the trend of the average accuracy in different scenarios
Fig. 2 Prediction accuracies of starvation resistance using the S_eQTL_S strategy with different P-value cutoffs. S_eQTL_S represents SNPs preselected from WGS data based on the results of the eQTL mapping of significant genes. The Y axis represents the Pearson correlation between the predicted genetic values and the phenotypic values for each trait in the validation sets. Both the X axis and the different colors of box plots represent the SNP datasets preselected from whole genome sequencing data using different P-value cutoffs based on the results of the eQTL mapping of significant genes from a transcriptome-wide association study (TWAS). GBLUP-Female and GBLUP-Male refer to performing genomic best linear unbiased prediction (GBLUP) on the female and male lines. GFBLUP-Female and GFBLUP-Male refer to performing genomic feature best linear unbiased prediction (GFBLUP) on the female and male lines. TWAS (P < cutoffs) refers to using the P-value cutoffs to preselect significant genes from TWAS. Black lines indicate the trend of the average accuracy in different scenarios
In the present study, we determined the impact of different SNP preselection strategies on prediction accuracy using WGS and gene expression data. To the best of our knowledge, this is the first time that gene expression data from a whole population have been used to preselect feature SNPs to improve the accuracy of genomic prediction. Overall, using SNPs preselected from WGS data based on gene expression data resulted in greater accuracy and smaller bias of genomic prediction for the startle response and starvation resistance in Drosophila. In particular, when using the SNPs preselected from the eQTL mapping of significant genes, the best accuracies represented increases of 60.66% and 39.09% for starvation resistance and 27.40% and 35.36% for the startle response in the female and male lines, respectively, compared with GBLUP using the complete WGS data. The knowledge gained from this study will help researchers make fuller use of omics data to improve the power of genomic prediction.
Total genomic heritability and prediction accuracy
Before performing genomic prediction, the heritabilities of the analyzed traits were estimated in the male and female lines. We found that the analyzed traits had high heritability, especially starvation resistance, for which the genetic variance explained almost all of the phenotypic variability in both the female and male lines (Table 1). These results are similar to those of a previous study [25] but higher than the results in [26]. This may be due to the quality control of the SNPs, the number of lines, and the use of line means for phenotypes in the present study, which are the same as in [25] and different from those in [16]. The high heritability of the analyzed traits indicates either predominantly additive gene action at the loci affecting the traits or contributions from non-additive gene action at many loci. If additive gene action were the source of the high heritability, a high prediction accuracy would be expected [38]. However, in this study, the high heritability of the traits did not translate into high prediction accuracy: using the WGS data, the accuracies of GBLUP were 0.208 ± 0.020 (0.181 ± 0.022) for the startle response and 0.272 ± 0.017 (0.307 ± 0.015) for starvation resistance in the female (male) lines (Table 2). One possible reason is the small size of the reference population for genomic prediction; another is that non-additive gene action may have inflated the estimated additive genetic variance components [39]. A previous study found that epistasis dominates the genetic architecture of quantitative traits in Drosophila [40]. Therefore, the high heritability of the analyzed traits was most likely the result of non-additive gene action. In addition, the accuracies of GBLUP in the present study differed from those in [5, 17] and were similar to those in [25]; this difference may be due to the quality control of the SNPs, the fixed effects, the cross-validation procedure, or the size of the reference population.
Genomic feature BLUP model for genomic prediction
GFBLUP is an extension of the traditional GBLUP model that separates the total genomic component into two random genetic components using prior biological knowledge [36]. When a genomic feature is enriched for causal variants, GFBLUP generally achieves greater accuracy by weighting the genomic feature differently in the model according to the estimated variance components [17, 36]. Similar results were found in this study (Fig. 1 and Fig. 2). Furthermore, the accuracy of GFBLUP was influenced by the composition of the genomic features. If the proportion of QTNs among the preselected genomic feature markers was very small (or even zero), the accuracy of GFBLUP decreased because spurious genomic features received excessive weight [41]; similar results were found in this study (Table 2). If the proportion of QTNs among the preselected genomic feature markers was large, GFBLUP further increased prediction accuracy compared to GBLUP with the genomic features only or with the complete WGS data [17, 36]. For example, when TWAS and eQTL-mapping P-value cutoffs of 1e-05 and 0.001 were used to preselect 3,500 and 5,377 SNPs in the female and male lines as the genomic feature, the accuracies of GBLUP with the genomic feature were 0.418 and 0.353 for starvation resistance in the female and male lines, respectively; these values are lower than the accuracies of GFBLUP (0.419 and 0.381 for the female and male lines) (Fig. 2). However, if the proportion of QTNs among the preselected genomic feature markers was small, GFBLUP resulted in lower accuracy than GBLUP with the genomic features only.
For example, when the best parameters (TWAS and eQTL-mapping P-value cutoffs of 1e-05 and 0.05) were used to preselect 177,035 and 227,569 SNPs in the female and male lines as the genomic feature, the accuracies of GBLUP with the genomic feature were 0.437 and 0.414 for starvation resistance in the female and male lines, respectively, which were higher than those of GFBLUP (0.355 and 0.369 for the female and male lines) (Fig. 2). Therefore, the strength of GFBLUP depends on the preselection strategy for the genomic features.
SNP preselection strategies influencing prediction accuracy
Performing genomic prediction with prior biological knowledge can improve the predictive ability for complex traits [17, 42, 43]. In this study, four association-based strategies were proposed to preselect SNPs from WGS data for genomic prediction. We found that S_GWAS did not improve the prediction accuracy, especially for P-value cutoffs of less than 0.001 (Table 2). Similar results were reported in previous studies using SNPs preselected from GWAS [8, 11]. The main reason is that overfitting decreases the prediction accuracy; here, overfitting means that a small proportion of the variants captured a large proportion of the variance components in the prediction model (Table S9 and Table S10). In addition, only a small number of SNPs were preselected at the stricter GWAS P-value cutoffs (Table S2), consistent with a previous study showing that the accuracy of GBLUP decreases with a decreasing number of SNPs [5].
Moreover, using S_TWAS with different P-value cutoffs for genomic prediction resulted in lower prediction accuracies than GBLUP with the complete WGS data (Table 3). However, unlike S_GWAS, the prediction models using S_TWAS showed no overfitting (Table S11 and Table S12). The main factor behind the decrease in prediction accuracy is that very few causal variants were captured using the genomic positions of the significant genes from the TWAS (Table S4), as a gene's expression level is affected not only by variants near the gene (cis-eQTLs) but also by other SNPs across the genome (trans-eQTLs) [30]. This was confirmed by the greater accuracies obtained using the SNPs preselected from the eQTL mapping of significant genes (Fig. 1 and Fig. 2).
Furthermore, when using S_eQTL_A with different P-value cutoffs for genomic prediction, only the startle response achieved greater accuracy than GBLUP with the complete WGS data, most likely because eQTL mapping filtered out extreme noise when preselecting the SNPs. Because numerous genes are expressed in Drosophila [30], combining the significant eQTLs of all genes covered almost the whole genome (Table S6).
Finally, we combined the strengths of TWAS and eQTL mapping by using S_eQTL_S for genomic prediction and obtained higher accuracy and smaller bias (Fig. 1, Fig. 2 and Table S7), since TWAS and eQTL mapping of gene expression data jointly link genomic variation to organismal phenotypes [21]. Briefly, the significant genes from the TWAS in the training population represent expression levels directly associated with the traits, and eQTL mapping in the whole population identifies the SNPs associated with those expression levels. In addition, combining analyses of genomic variation with those of transcriptional and phenotypic variation makes it possible to determine the gene networks associated with complex traits [30], so that gene–gene interactions (epistasis) associated with complex traits can be captured. Overall, using genomic features preselected from multi-omics data is a feasible strategy to improve the power of genomic prediction.
Challenges for integrating transcriptomic data into genomic predictions
Both this study and several previous studies have indicated that integrating transcriptomic data into genomic prediction is a feasible way to improve its power [21, 24, 25]. However, using transcriptomic data for genomic prediction in animal and plant breeding remains challenging, because it is too expensive to perform RNA sequencing on thousands of individuals in routine implementation, especially in practical breeding. Furthermore, unlike SNP genotypes, gene expression levels are tissue-specific and time-dependent; hence, RNA must be extracted from the tissue associated with the trait of interest at the appropriate time, which is very difficult to achieve in practice. In this study, RNA was extracted from whole flies, ignoring tissue-specific and time-dependent effects, such that the gene expression levels represent averages across all tissues [30]. It is therefore important to balance the costs and benefits of transcriptomic information when integrating transcriptomic data into genomic prediction for practical implementations.
The WGS data were downloaded from the Drosophila Genetic Reference Panel (DGRP) (http://dgrp.gnets.ncsu.edu/). The mean quantitative trait values and gene expression levels were taken from a previous study [30]. The gene expression data can be found in GEO (accession GSE117850).
DGRP: Drosophila Genetic Reference Panel
eQTL: Expression quantitative trait locus
G: Standardized relatedness matrix
GBLUP: Genomic best linear unbiased prediction
GFBLUP: Genomic feature best linear unbiased prediction
GS: Genomic selection
GWAS: Genome-wide association study
LD: Linkage disequilibrium
MAF: Minor allele frequency
Omics: Multiple genome-level data
QTL: Quantitative trait locus
REML: Restricted maximum likelihood
SNP: Single nucleotide polymorphism
TWAS: Transcriptome-wide association study
WGS: Whole genome sequencing
Meuwissen TH, Hayes BJ, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157(4):1819–29.
Garcia-Ruiz A, Cole JB, VanRaden PM, Wiggans GR, Ruiz-Lopez FJ, Van Tassell CP. Changes in genetic selection differentials and generation intervals in US Holstein dairy cattle as a result of genomic selection. Proc Natl Acad Sci U S A. 2016;113(28):E3995–4004.
Meuwissen TH, Goddard ME. Accurate prediction of genetic values for complex traits by whole-genome resequencing. Genetics. 2010;185(2):623–31.
Iheshiulor OO, Woolliams JA, Yu X, Wellmann R, Meuwissen TH. Within- and across-breed genomic prediction using whole-genome sequence and single nucleotide polymorphism panels. Genet Sel Evol. 2016;48(1):15.
Ober U, Ayroles JF, Stone EA, Richards S, Zhu D, Gibbs RA, et al. Using whole-genome sequence data to predict quantitative trait phenotypes in Drosophila melanogaster. PLoS Genet. 2012;8(5):e1002685.
van Binsbergen R, Calus MP, Bink MC, van Eeuwijk FA, Schrooten C, Veerkamp RF. Genomic prediction using imputed whole-genome sequence data in Holstein Friesian cattle. Genet Sel Evol. 2015;47:71.
Zhang C, Kemp RA, Stothard P, Wang Z, Boddicker N, Krivushin K, et al. Genomic evaluation of feed efficiency component traits in Duroc pigs using 80K, 650K and whole-genome sequence variants. Genet Sel Evol. 2018;50(1):14.
Ye S, Gao N, Zheng R, Chen Z, Teng J, Yuan X, et al. Strategies for obtaining and pruning imputed whole-genome sequence data for genomic prediction. Front Genet. 2019;10:673.
Raymond B, Bouwman AC, Schrooten C, Houwing-Duistermaat J, Veerkamp RF. Utility of whole-genome sequence data for across-breed genomic prediction. Genet Sel Evol. 2018;50(1):27.
Zhang Z, Ober U, Erbe M, Zhang H, Gao N, He J, et al. Improving the accuracy of whole genome prediction for complex traits using the results of genome wide association studies. PLoS One. 2014;9(3):e93017.
Veerkamp RF, Bouwman AC, Schrooten C, Calus MP. Genomic prediction using preselected DNA variants from a GWAS with whole-genome sequence data in Holstein-Friesian cattle. Genet Sel Evol. 2016;48(1):95.
Song H, Ye S, Jiang Y, Zhang Z, Zhang Q, Ding X. Using imputation-based whole-genome sequencing data to improve the accuracy of genomic prediction for combined populations in pigs. Genet Sel Evol. 2019;51(1):58.
Kemper KE, Reich CM, Bowman PJ, Vander Jagt CJ, Chamberlain AJ, Mason BA, et al. Improved precision of QTL mapping using a nonlinear Bayesian method in a multi-breed population leads to greater accuracy of across-breed genomic predictions. Genet Sel Evol. 2015;47(1):29.
Ye S, Song H, Ding X, Zhang Z, Li J. Pre-selecting markers based on fixation index scores improved the power of genomic evaluations in a combined Yorkshire pig population. Animal. 2020;14(8):1555–64.
Heidaritabar M, Calus MP, Megens HJ, Vereijken A, Groenen MA, Bastiaansen JW. Accuracy of genomic prediction using imputed whole-genome sequence data in white layers. J Anim Breed Genet. 2016;133(3):167–79.
Gao N, Martini JWR, Zhang Z, Yuan XL, Zhang H, Simianer H, et al. Incorporating gene annotation into genomic prediction of complex phenotypes. Genetics. 2017;207(2):489–501.
Edwards SM, Sorensen IF, Sarup P, Mackay TFC, Sorensen P. Genomic prediction for quantitative traits is improved by mapping variants to gene ontology categories in Drosophila melanogaster. Genetics. 2016;203(4):1871–83.
Abdollahi-Arpanahi R, Morota G, Peñagaricano F. Predicting bull fertility using genomic data and biological information. J Dairy Sci. 2017;100(12):9656.
Vazquez AI, Veturi Y, Behring M, Shrestha S, Kirst M, Resende MF Jr, et al. Increased proportion of variance explained and prediction accuracy of survival of breast cancer patients with use of whole-genome multiomic profiles. Genetics. 2016;203(3):1425–38.
Dimitrakopoulos L, Prassas I, Diamandis EP, Charames GS. Onco-proteogenomics: multi-omics level data integration for accurate phenotype prediction. Crit Rev Clin Lab Sci. 2017;54(6):414–32.
Azodi CB, Pardo J, VanBuren R, de Los CG, Shiu SH. Transcriptome-based prediction of complex traits in maize. Plant Cell. 2020;32(1):139–51.
Xu Y, Xu C, Xu S. Prediction and association mapping of agronomic traits in maize using multiple omic data. Heredity (Edinb). 2017;119(3):174–84.
Wang S, Wei J, Li R, Qu H, Chater JM, Ma R, et al. Identification of optimal prediction models using multi-omic data for selecting hybrid rice. Heredity (Edinb). 2019;123(3):395–406.
Hu X, Xie W, Wu C, Xu S. A directed learning strategy integrating multiple omic data improves genomic prediction. Plant Biotechnol J. 2019;17(10):2011–20.
Morgante F, Huang W, Sørensen P, Maltecca C, Mackay TFC. Leveraging multiple layers of data to predict Drosophila complex traits. bioRxiv. 2019. https://doi.org/10.1101/824896.
Li Z, Gao N, Martini JWR, Simianer H. Integrating gene expression data into genomic prediction. Front Genet. 2019;10:126.
Guo Z, Magwire MM, Basten CJ, Xu Z, Wang D. Evaluation of the utility of gene expression and metabolic information for genomic prediction in maize. Theor Appl Genet. 2016;129(12):2413–27.
Mackay TFC, Richards S, Stone EA, Barbadilla A, Ayroles JF, Zhu DH, et al. The Drosophila melanogaster genetic reference panel. Nature. 2012;482(7384):173–8.
Huang W, Massouras A, Inoue Y, Peiffer J, Ramia M, Tarone AM, et al. Natural variation in genome architecture among 205 Drosophila melanogaster genetic reference panel lines. Genome Res. 2014;24(7):1193–208.
Everett LJ, Huang W, Zhou S, Carbone MA, Lyman RF, Arya GH, et al. Gene expression networks in the Drosophila genetic reference panel. Genome Res. 2020;30(3):485–96.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MA, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75.
Browning B, Browning S. Genotype imputation with millions of reference samples. Am J Hum Genet. 2016;98(1):116–26.
Speed D, Balding DJ. SumHer better estimates the SNP heritability of complex traits from summary statistics. Nat Genet. 2019;51(2):277–84.
Zhou X, Stephens M. Efficient multivariate linear mixed model algorithms for genome-wide association studies. Nat Methods. 2014;11(4):407–9.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91(11):4414–23.
Sarup P, Jensen J, Ostersen T, Henryon M, Sorensen P. Increased prediction accuracy using a genomic feature model including prior information on quantitative trait locus regions in purebred Danish Duroc pigs. BMC Genet. 2016;17:11.
Henderson CR. Applications of linear models in animal breeding. Guelph: University of Guelph; 1984.
Daetwyler HD, Villanueva B, Woolliams JA. Accuracy of predicting the genetic risk of disease using a genome-wide approach. PLoS One. 2008;3(10):e3395.
Maki-Tanila A, Hill WG. Influence of gene interaction on complex trait variation with multilocus models. Genetics. 2014;198(1):355–67.
Huang W, Richards S, Carbone MA, Zhu D, Anholt RR, Ayroles JF, et al. Epistasis dominates the genetic architecture of Drosophila quantitative traits. Proc Natl Acad Sci U S A. 2012;109(39):15553–9.
Fang L, Sahana G, Ma P, Su G, Yu Y, Zhang S, et al. Exploring the genetic architecture and improving genomic prediction accuracy for mastitis and milk production traits in dairy cattle by mapping variants to hepatic transcriptomic regions responsive to intra-mammary infection. Genet Sel Evol. 2017;49(1):44.
de Los CG, Vazquez AI, Fernando R, Klimentidis YC, Sorensen D. Prediction of complex human traits using the genomic best linear unbiased predictor. PLoS Genet. 2013;9(7):e1003608.
MacLeod IM, Bowman PJ, Vander Jagt CJ, Haile-Mariam M, Kemper KE, Chamberlain AJ, et al. Exploiting biological priors and sequence variants enhances QTL discovery and genomic prediction of complex traits. BMC Genomics. 2016;17:144.
The authors are grateful to Prof. Trudy F. C. Mackay and colleagues for sharing the resources of the DGRP lines as a public dataset. We also very much appreciate the feedback from the editor and two anonymous reviewers, whose useful suggestions and thoughtful comments helped us improve the manuscript considerably. Finally, the authors are grateful to Xiaofeng Zhou, Yingting He, Shuqi Diao, and Jinyan Teng for checking the manuscript and correcting writing mistakes.
This work was supported by the National Natural Science Foundation of China (31772556), the Local Innovative and Research Teams Project of Guangdong Province (2019BT02N630), the grants from the earmarked fund for China Agriculture Research System (CARS-35), and the Science and Technology Innovation Strategy projects of Guangdong Province (Grant No. 2018B020203002).
Guangdong Provincial Key Lab of Agro-Animal Genomics and Molecular Breeding, National Engineering Research Centre for Breeding Swine Industry, College of Animal Science, South China Agricultural University, Guangzhou, Guangdong, China
Shaopan Ye, Jiaqi Li & Zhe Zhang
SPY, ZZ, and JQL conceived the study, designed the project, and helped draft the manuscript. SPY performed the genomic prediction and analyzed the accuracy. All authors read and approved the manuscript.
Correspondence to Zhe Zhang.
Additional file 1: Table S1. The bias values of genomic prediction using preselected SNPs based on GWAS results (S_GWAS). Table S2. The number of preselected SNPs based on the GWAS results (S_GWAS). Table S3. The bias values of genomic prediction using preselected SNPs based on TWAS results (S_TWAS). Table S4. The number of preselected SNPs based on the TWAS results (S_TWAS). Table S5. The bias values of genomic prediction using preselected SNPs based on the results of eQTL mapping of all genes (S_eQTL_A). Table S6. The number of preselected SNPs based on the results of eQTL mapping of all genes (S_eQTL_A). Table S7. The bias values of genomic prediction using preselected SNPs based on the results of eQTL mapping of significant genes (S_eQTL_S). Table S8. The number of preselected SNPs based on the results of eQTL mapping of significant genes (S_eQTL_S). Table S9. The variance component of GBLUP using preselected SNPs based on the GWAS results (S_GWAS). Table S10. The variance component of GFBLUP using preselected SNPs based on the GWAS results. Table S11. The variance component of GBLUP using preselected SNPs based on the TWAS results (S_TWAS). Table S12. The variance component of GFBLUP using preselected SNPs based on the TWAS results (S_TWAS).
Ye, S., Li, J. & Zhang, Z. Multi-omics-data-assisted genomic feature markers preselection improves the accuracy of genomic prediction. J Animal Sci Biotechnol 11, 109 (2020). https://doi.org/10.1186/s40104-020-00515-5
Genomic prediction
Multi-omics data
SNP preselection | CommonCrawl |
156) What allows separate systems to communicate directly with each other, eliminating the need for manual entry into multiple systems? A) Integration B) Intelligence C) Data interchange D) Demand plan
157) What takes information entered into a given system and sends it automatically to all downstream systems and processes? A) Forward integration B) Forward data interchange C) Backward integration D) Backward data interchange
158) What takes information entered into a given system and sends it automatically to all upstream systems and processes? A) Forward integration B) Forward data interchange C) Backward integration D) Backward data interchange
159) What provides enterprisewide support and data access for a firm's operations and business processes? A) Enterprise systems B) Enterprise application integration C) Middleware D) Enterprise application integration middleware
160) What connects the plans, methods, and tools aimed at integrating separate enterprise systems? A) Enterprise systems B) Enterprise application integration C) Middleware D) Enterprise application integration middleware
161) What are several different types of software that sit between and provide connectivity for two or more software applications? A) Enterprise systems B) Enterprise application integration C) Middleware D) Enterprise application integration middleware
162) What takes a new approach to middleware by packaging commonly used applications together, reducing the time needed to integrate applications from multiple vendors? A) Enterprise systems B) Enterprise application integration C) Middleware D) Enterprise application integration middleware
163) What is an application integration? A) The integration of a company's existing management information systems. B) The integration of data from multiple sources, which provides a unified view of all data. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Sends information entered into a given system automatically to all upstream systems and processes.
Answer: A
164) What is a data integration? A) The integration of a company's existing management information systems. B) The integration of data from multiple sources, which provides a unified view of all data. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Sends information entered into a given system automatically to all upstream systems and processes.
165) What is a forward integration? A) The integration of a company's existing management information systems. B) The integration of data from multiple sources, which provides a unified view of all data. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Sends information entered into a given system automatically to all upstream systems and processes.
166) What is a backward integration? A) The integration of a company's existing management information systems. B) The integration of data from multiple sources, which provides a unified view of all data. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Sends information entered into a given system automatically to all upstream systems and processes.
167) What are enterprise systems? A) The integration of a company's existing management information systems. B) The integration of data from multiple sources, which provides a unified view of all data. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Enterprisewide support and data access for a firm's operations and business processes.
168) What is enterprise application integration (EAI)? A) Connects the plans, methods, and tools aimed at integrating separate enterprise systems. B) The integration of data from multiple sources, which provides a unified view of all data. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Enterprisewide support and data access for a firm's operations and business processes.
169) How are integrations achieved? A) Connects the plans, methods, and tools aimed at integrating separate enterprise systems. B) Integrations are achieved using middleware - several types of software that sit between and provide connectivity for two or more software applications. C) Sends information entered into a given system automatically to all downstream systems and processes. D) Enterprisewide support and data access for a firm's operations and business processes.
170) What is middleware? A) The use of the Internet to provide customers with the ability to gain personalized information by querying corporate databases and their information sources. B) The integration of data from multiple sources, which provides a unified view of all data. C) Translates information between disparate systems. D) Packages commonly used applications together, reducing the time needed to integrate applications from multiple vendors.
171) What is enterprise application integration (EAI) middleware? A) The use of the Internet to provide customers with the ability to gain personalized information by querying corporate databases and their information sources. B) The integration of data from multiple sources, which provides a unified view of all data. C) Translates information between disparate systems. D) Packages commonly used applications together, reducing the time needed to integrate applications from multiple vendors.
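Questions 170 and 171 hinge on middleware translating information between disparate systems. A minimal, hypothetical adapter makes this concrete; the pipe-delimited legacy format and the field names below are invented purely for illustration.

```python
# Hypothetical middleware adapter: translates a legacy system's
# pipe-delimited record into the dict format a downstream CRM expects.
def legacy_to_crm(legacy_row: str) -> dict:
    cust_id, name, balance = legacy_row.split("|")
    return {"id": int(cust_id), "name": name, "balance": float(balance)}

record = legacy_to_crm("1007|Acme Corp|2500.50")
print(record)  # {'id': 1007, 'name': 'Acme Corp', 'balance': 2500.5}
```

EAI middleware packages many such adapters together so applications from multiple vendors do not each need bespoke point-to-point translation.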
172) What is eintegration? A) The use of the Internet to provide customers with the ability to gain personalized information by querying corporate databases and their information sources. B) The integration of data from multiple sources, which provides a unified view of all data. C) Translates information between disparate systems. D) Packages commonly used applications together, reducing the time needed to integrate applications from multiple vendors.
173) Which of the following is not an example of a primary enterprise system? A) Supply chain management B) Customer relationship management C) Enterprise revenue planning D) Enterprise resource planning
174) What is the use of the Internet to provide customers with the ability to gain personalized information by querying corporate databases and their information sources? A) Eintegration B) Application integration C) Data integration D) Forward integration
Answer: A
175) What is the integration of a company's existing management information systems? A) Backward integration B) Application integration C) Data integration D) Forward integration
176) What is the integration of data from multiple sources, which provides a unified view of all data? A) Backward integration B) Application integration C) Data integration D) Forward integration
177) What takes information entered into a given system and sends it automatically to all downstream systems and processes? A) Backward integration B) Application integration C) Data integration D) Forward integration
178) What takes information entered into a given system and sends it automatically to all upstream systems and processes? A) Backward integration B) Application integration C) Data integration D) Forward integration
179) What is data integration? A) The integration of data from multiple sources, which provides a unified view of all data. B) Takes information entered into a given system and sends it automatically to all downstream systems and processes. C) Takes information entered into a given system and sends it automatically to all upstream systems and processes D) The integration of a company's existing management information systems.
180) What is a forward integration? A) The integration of data from multiple sources, which provides a unified view of all data. B) Takes information entered into a given system and sends it automatically to all downstream systems and processes. C) Takes information entered into a given system and sends it automatically to all upstream systems and processes D) The integration of a company's existing management information systems.
181) What is a backward integration? A) The integration of data from multiple sources, which provides a unified view of all data. B) Takes information entered into a given system and sends it automatically to all downstream systems and processes. C) Takes information entered into a given system and sends it automatically to all upstream systems and processes D) The integration of a company's existing management information systems.
182) What is application integration? A) The integration of data from multiple sources, which provides a unified view of all data. B) Takes information entered into a given system and sends it automatically to all downstream systems and processes. C) Takes information entered into a given system and sends it automatically to all upstream systems and processes D) The integration of a company's existing management information systems.
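Questions 177 through 182 contrast forward and backward integration. The toy sketch below (system names invented) shows forward integration: a record entered once into an upstream system fans out automatically to every downstream system.

```python
# Toy forward integration: an entry in one system is pushed automatically
# to all downstream systems (system names are hypothetical).
class System:
    def __init__(self, name):
        self.name = name
        self.records = []
        self.downstream = []

    def enter(self, record):
        self.records.append(record)
        for s in self.downstream:   # forward integration: push downstream
            s.enter(record)

order_entry = System("order-entry")
billing = System("billing")
shipping = System("shipping")
order_entry.downstream = [billing, shipping]

order_entry.enter({"order": 42, "qty": 3})
print(billing.records, shipping.records)
```

Backward integration would wire the links in the opposite direction, so a change captured downstream propagates to the upstream systems instead.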
183) What is supply chain management (SCM)? A) The management of information flows between and among activities in a supply chain to maximize total supply chain effectiveness and corporate profitability. B) Takes information entered into a given system and sends it automatically to all downstream systems and processes. C) A means of managing all aspects of a customer's relationship with an organization to increase customer loyalty and retention and an organization's profitability. D) Connects the plans, methods, and tools aimed at integrating separate enterprise systems.
184) What is enterprise resource planning (ERP)? A) The management of information flows between and among activities in a supply chain to maximize total supply chain effectiveness and corporate profitability. B) Integrates all departments and functions throughout an organization into a single IT system (or integrated set of IT systems) so employees can make decisions by viewing enterprisewide information about all business operations. C) A means of managing all aspects of a customer's relationship with an organization to increase customer loyalty and retention and an organization's profitability. D) Connects the plans, methods, and tools aimed at integrating separate enterprise systems.
185) What is customer relationship management (CRM)? A) The management of information flows between and among activities in a supply chain to maximize total supply chain effectiveness and corporate profitability. B) Integrates all departments and functions throughout an organization into a single IT system (or integrated set of IT systems) so employees can make decisions by viewing enterprisewide information about all business operations. C) A means of managing all aspects of a customer's relationship with an organization to increase customer loyalty and retention and an organization's profitability. D) Connects the plans, methods, and tools aimed at integrating separate enterprise systems.
186) What is enterprise application integration (EAI)? A) The management of information flows between and among activities in a supply chain to maximize total supply chain effectiveness and corporate profitability. B) Integrates all departments and functions throughout an organization into a single IT system (or integrated set of IT systems) so employees can make decisions by viewing enterprisewide information about all business operations. C) A means of managing all aspects of a customer's relationship with an organization to increase customer loyalty and retention and an organization's profitability. D) Connects the plans, methods, and tools aimed at integrating separate enterprise systems.
187) In which of the five basic supply chain activities do you prepare to manage all resources required to meet demand? A) Plan B) Source C) Deliver D) Return
188) In which of the five basic supply chain activities do you build relationships with suppliers to procure raw materials? A) Plan B) Source C) Deliver D) Return
189) In which of the five basic supply chain activities do you manufacture products and create production schedules? A) Plan B) Source C) Deliver D) Make
190) In which of the five basic supply chain activities do you plan for the transportation of goods to customers? A) Plan B) Source C) Deliver D) Return
191) In which of the five basic supply chain activities do you support customers and product returns? A) Plan B) Source C) Deliver D) Return
192) Where would you find the customers' customer in a typical supply chain? A) Upstream B) Downstream C) In the middle D) Not on the supply chain
193) Where would you find the suppliers' supplier in a typical supply chain? A) Upstream B) Downstream C) In the middle D) Not on the supply chain
194) Where would you find the manufacturer and distributor in a typical supply chain? A) Upstream B) Downstream C) In the middle D) Not on the supply chain
195) Walmart and Procter & Gamble (P&G) implemented a tremendously successful SCM system. The system linked Walmart's _________ centers directly to P&G's _______ centers. A) Manufacturing, distribution B) Distribution, manufacturing C) Stores, distribution D) Distribution, stores
196) What can effective and efficient supply chain management systems enable an organization to accomplish? A) Increase the power of its buyers B) Increase its supplier power C) Increase switching costs to increase the threat of substitute products or services D) All of the above
197) Which of the following is not one of the five basic components of supply chain management? A) Plan B) Source C) Cost D) Deliver
198) Which of the following is not one of the five basic components of supply chain management? A) Plan B) Source C) Analyze D) Deliver
Answer: C
199) Which of the following is not one of the five basic components of supply chain management? A) Plan B) Source C) Sale D) Deliver
200) What is it called when distorted product-demand information ripples from one partner to the next throughout the supply chain? A) Bullwhip effect B) Demand planning systems C) Supply chain planning systems D) Supply chain execution systems
201) Which of the below represents the bullwhip effect? A) Organizations know about employee events triggered downstream in the supply chain B) Customers receive distorted product demand information regarding sales information C) Distorted product-demand information ripples from one partner to the next throughout the supply chain D) The ability to view all areas up and down the supply chain
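Questions 200 and 201 describe the bullwhip effect: distorted demand information rippling from one partner to the next. A small simulation (all parameters invented for illustration) shows how order variance grows as each upstream tier over-reacts to the tier below it.

```python
import random

# Toy bullwhip simulation: each upstream tier forecasts naively and
# over-orders to cover swings, so order variance grows tier by tier.
random.seed(1)
weeks = 200
demand = [100 + random.gauss(0, 5) for _ in range(weeks)]

def orders_upstream(incoming):
    placed, forecast = [], incoming[0]
    for d in incoming:
        forecast = 0.5 * forecast + 0.5 * d                # smoothing
        placed.append(max(0.0, d + 1.5 * (d - forecast)))  # reactive padding
    return placed

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

tiers = [demand]          # retailer demand, then 3 upstream tiers
for _ in range(3):
    tiers.append(orders_upstream(tiers[-1]))

tier_vars = [variance(t) for t in tiers]
print([round(v, 1) for v in tier_vars])  # variance ripples upward
```

Supply chain visibility (question 202) counters this by letting every tier see actual end-customer demand instead of only the orders of its immediate neighbor.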
202) What is the ability to view all areas up and down the supply chain in real time? A) Bullwhip effect B) Demand planning software C) Supply chain visibility D) Supply chain execution software
203) Which of the below metrics represents an unfilled customer order for a product that is out of stock? A) Back order B) Inventory cycle time C) Customer order cycle time D) Inventory turnover
204) Which of the below metrics represents the time it takes to manufacture a product and deliver it to the retailer? A) Back order B) Inventory cycle time C) Customer order cycle time D) Inventory turnover
205) Which of the below metrics represents the agreed upon time between the purchase of a product and the delivery of the product? A) Back order B) Inventory cycle time C) Customer order cycle time D) Inventory turnover
206) Which of the below metrics represents the frequency of inventory replacement? A) Back order B) Inventory cycle time C) Customer order cycle time D) Inventory turnover
207) What is a back order? A) An unfilled customer order for a product that is out of stock. B) The time it takes to manufacture a product and deliver it to the retailer. C) The agreed upon time between the purchase of a product and the delivery of the product. D) The frequency of inventory replacement.
208) What is inventory cycle time? A) An unfilled customer order for a product that is out of stock. B) The time it takes to manufacture a product and deliver it to the retailer. C) The agreed upon time between the purchase of a product and the delivery of the product. D) The frequency of inventory replacement.
Answer: B
209) What is customer order cycle time? A) An unfilled customer order for a product that is out of stock. B) The time it takes to manufacture a product and deliver it to the retailer. C) The agreed upon time between the purchase of a product and the delivery of the product. D) The frequency of inventory replacement.
210) What is inventory turnover? A) An unfilled customer order for a product that is out of stock. B) The time it takes to manufacture a product and deliver it to the retailer. C) The agreed upon time between the purchase of a product and the delivery of the product. D) The frequency of inventory replacement.
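Question 210 defines inventory turnover as the frequency of inventory replacement. In standard operations-management terms it is computed as cost of goods sold divided by average inventory; the figures below are hypothetical.

```python
# Hypothetical annual figures for the inventory turnover metric
cogs = 1_200_000.0           # cost of goods sold over the year
avg_inventory = 150_000.0    # average inventory value over the year

inventory_turnover = cogs / avg_inventory  # times stock was replaced
days_on_hand = 365 / inventory_turnover    # days of inventory on hand

print(inventory_turnover)  # 8.0
print(days_on_hand)        # 45.625
```

A higher turnover means stock is replaced more often, tying up less capital in the warehouse for a given sales volume.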
211) What is supply chain visibility? A) The ability to view all areas up and down the supply chain in real time. B) Uses advanced mathematical algorithms to improve the flow and efficiency of the supply chain while reducing inventory. C) Ensures supply chain cohesion by automating the different activities of the supply chain. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
212) What is a supply chain planning system? A) The ability to view all areas up and down the supply chain in real time. B) Uses advanced mathematical algorithms to improve the flow and efficiency of the supply chain while reducing inventory. C) Ensures supply chain cohesion by automating the different activities of the supply chain. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
213) What is a supply chain execution system? A) The ability to view all areas up and down the supply chain in real time. B) Uses advanced mathematical algorithms to improve the flow and efficiency of the supply chain while reducing inventory. C) Ensures supply chain cohesion by automating the different activities of the supply chain. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
214) What is the bullwhip effect? A) The ability to view all areas up and down the supply chain in real time. B) Uses advanced mathematical algorithms to improve the flow and efficiency of the supply chain while reducing inventory. C) Ensures supply chain cohesion by automating the different activities of the supply chain. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
215) Which of the following is one of the business areas of supply chain management? A) Logistics B) Procurement C) Materials Management D) All of the above
216) What is the purchasing of goods and services to meet the needs of the supply chain? A) Procurement B) Logistics C) Materials management D) Bullwhip effect
217) What includes the processes that control the distribution, maintenance, and replacement of materials and personnel to support the supply chain? A) Procurement B) Logistics C) Materials management D) Bullwhip effect
218) What includes activities that govern the flow of tangible, physical materials through the supply chain such as shipping, transport, distribution, and warehousing? A) Procurement B) Logistics C) Materials management D) Bullwhip effect
219) What is procurement? A) The purchasing of goods and services to meet the needs of the supply chain. B) Includes the processes that control the distribution, maintenance, and replacement of materials and personnel to support the supply chain. C) Includes activities that govern the flow of tangible, physical materials through the supply chain such as shipping, transport, distribution, and warehousing. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
220) What is logistics? A) The purchasing of goods and services to meet the needs of the supply chain. B) Includes the processes that control the distribution, maintenance, and replacement of materials and personnel to support the supply chain. C) Includes activities that govern the flow of tangible, physical materials through the supply chain such as shipping, transport, distribution, and warehousing. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
221) What is materials management? A) The purchasing of goods and services to meet the needs of the supply chain. B) Includes the processes that control the distribution, maintenance, and replacement of materials and personnel to support the supply chain. C) Includes activities that govern the flow of tangible, physical materials through the supply chain such as shipping, transport, distribution, and warehousing. D) Occurs when distorted product-demand information ripples from one partner to the next throughout the supply chain.
222) What acquires raw materials and resources and distributes them to manufacturing as required? A) Inbound logistics B) Outbound logistics C) Logistics D) Cradle to grave
223) What distributes goods and services to customers? A) Inbound logistics B) Outbound logistics C) Logistics D) Cradle to grave
224) What includes the increasingly complex management of processes, information, and communication to take a product from cradle to grave? A) Inbound logistics B) Outbound logistics C) Logistics D) Cradle to grave
225) Which of the following questions can procurement help a company answer? A) What quantity of raw materials should we purchase to minimize spoilage? B) How can we guarantee that our raw materials meet production needs? C) At what price can we purchase materials to guarantee profitability? D) All of the above
226) Which of the following questions can logistics help a company answer? A) What is the quickest way to deliver products to our customers? B) What is the optimal way to place items in the warehouse for picking and packing? C) What is the optimal path to an item in the warehouse? D) All of the above
227) Which of the following questions can procurement help a company answer? A) What is the quickest way to deliver products to our customers? B) What is the optimal way to place items in the warehouse for picking and packing? C) What is the optimal path to an item in the warehouse? D) How can we guarantee that our raw materials meet production needs?
228) Which of the following questions can logistics help a company answer? A) What quantity of raw materials should we purchase to minimize spoilage? B) How can we guarantee that our raw materials meet production needs? C) At what price can we purchase materials to guarantee profitability? D) What path should the vehicles follow when delivering the goods?
229) Which of the following questions can materials management help a company answer? A) What are our current inventory levels? B) What items are running low in the warehouse? C) What items are at risk of spoiling in the warehouse? D) All of the above
230) Which of the following questions can materials management help a company answer? A) How do we dispose of spoiled items? B) What laws need to be followed for storing hazardous materials? C) Which items must be refrigerated when being stored and transported? D) All of the above
231) What is 3D Printing? A) A process that builds layer by layer in an additive process- a three-dimensional solid object from a digital model. B) Uses electronic tags and labels to identify objects wirelessly over short distances. C) Unmanned aircraft that can fly autonomously, or without a human. D) Focus on creating artificial intelligence devices that can move and react to sensory input.
232) What is RFID? A) A process that builds layer by layer in an additive process- a three-dimensional solid object from a digital model. B) Uses electronic tags and labels to identify objects wirelessly over short distances. C) Unmanned aircraft that can fly autonomously, or without a human. D) Focus on creating artificial intelligence devices that can move and react to sensory input.
233) What are drones? A) A process that builds layer by layer in an additive process- a three-dimensional solid object from a digital model. B) Uses electronic tags and labels to identify objects wirelessly over short distances. C) Unmanned aircraft that can fly autonomously, or without a human. D) Focus on creating artificial intelligence devices that can move and react to sensory input.
234) What are robotics? A) A process that builds layer by layer in an additive process- a three-dimensional solid object from a digital model. B) Uses electronic tags and labels to identify objects wirelessly over short distances. C) Unmanned aircraft that can fly autonomously, or without a human. D) Focus on creating artificial intelligence devices that can move and react to sensory input.
235) What is a process that builds, layer by layer in an additive process, a three-dimensional solid object from a digital model? A) 3D Printing B) Robotics C) Drones D) RFID
236) What focuses on creating artificial intelligence devices that can move and react to sensory input? A) 3D Printing B) Robotics C) Drones D) RFID
237) What are unmanned aircraft that can fly autonomously, or without a human? A) 3D Printing B) Robotics C) Drones D) RFID
238) What uses electronic tags and labels to identify objects wirelessly over short distances? A) 3D Printing B) Robotics C) Drones D) RFID
239) What is computer-aided design/computer-aided manufacturing (CAD/CAM)? A) Systems are used to create the digital designs and then manufacture the products. B) A cultural trend that places value on an individual's ability to be a creator of things as well as a consumer of things. C) A community center that provides technology, manufacturing equipment, and educational opportunities to the public that would otherwise be inaccessible or unaffordable. D) Promotes serialization or the ability to track individual items by using the unique serial number associated with each RFID tag.
240) What is the maker movement? A) Systems are used to create the digital designs and then manufacture the products. B) A cultural trend that places value on an individual's ability to be a creator of things as well as a consumer of things. C) A community center that provides technology, manufacturing equipment, and educational opportunities to the public that would otherwise be inaccessible or unaffordable. D) Promotes serialization or the ability to track individual items by using the unique serial number associated with each RFID tag
241) What is a makerspace? A) Systems are used to create the digital designs and then manufacture the products. B) A cultural trend that places value on an individual's ability to be a creator of things as well as a consumer of things. C) A community center that provides technology, manufacturing equipment, and educational opportunities to the public that would otherwise be inaccessible or unaffordable. D) Promotes serialization or the ability to track individual items by using the unique serial number associated with each RFID tag.
242) What is an RFID's electronic product code? A) Systems are used to create the digital designs and then manufacture the products. B) A cultural trend that places value on an individual's ability to be a creator of things as well as a consumer of things. C) A community center that provides technology, manufacturing equipment, and educational opportunities to the public that would otherwise be inaccessible or unaffordable. D) Promotes serialization or the ability to track individual items by using the unique serial number associated with each RFID tag.
Answer: D
243) What are systems used to create the digital designs and then manufacture the products? A) Computer-aided design/computer-aided manufacturing B) Maker movement C) Makerspace D) RFID electronic product code
244) What is a cultural trend that places value on an individual's ability to be a creator of things as well as a consumer of things? A) Computer-aided design/computer-aided manufacturing B) Maker movement C) Makerspace D) RFID electronic product code
245) What is a community center that provides technology, manufacturing equipment, and educational opportunities to the public that would otherwise be inaccessible or unaffordable? A) Computer-aided design/computer-aided manufacturing B) Maker movement C) Makerspace D) RFID electronic product code
246) What promotes serialization or the ability to track individual items by using the unique serial number associated with each RFID tag? A) Computer-aided design/computer-aided manufacturing B) Maker movement C) Makerspace D) RFID electronic product code
247) What is supply chain event management (SCEM)? A) Enables an organization to react more quickly to resolve supply chain issues. B) Applies technology to the activities in the order life cycle from inquiry to sale. C) Allows an organization to reduce the cost and time required during the design process of a product. D) Helps organizations reduce their investment in inventory while improving customer satisfaction through product availability.
Answer: A
248) What is selling chain management? A) Enables an organization to react more quickly to resolve supply chain issues. B) Applies technology to the activities in the order life cycle from inquiry to sale. C) Allows an organization to reduce the cost and time required during the design process of a product. D) Helps organizations reduce their investment in inventory while improving customer satisfaction through product availability.
249) What is collaborative engineering? A) Enables an organization to react more quickly to resolve supply chain issues. B) Applies technology to the activities in the order life cycle from inquiry to sale. C) Allows an organization to reduce the cost and time required during the design process of a product. D) Helps organizations reduce their investment in inventory while improving customer satisfaction through product availability.
250) What is collaborative demand planning? A) Enables an organization to react more quickly to resolve supply chain issues. B) Applies technology to the activities in the order life cycle from inquiry to sale. C) Allows an organization to reduce the cost and time required during the design process of a product. D) Helps organizations reduce their investment in inventory while improving customer satisfaction through product availability.
251) What enables an organization to react more quickly to resolve supply chain issues? A) Supply chain event management B) Selling chain management C) Collaborative engineering D) Collaborative demand planning
252) What applies technology to the activities in the order life cycle from inquiry to sale? A) Supply chain event management B) Selling chain management C) Collaborative engineering D) Collaborative demand planning
253) What allows an organization to reduce the cost and time required during the design process of a product? A) Supply chain event management B) Selling chain management C) Collaborative engineering D) Collaborative demand planning
Answer: C
254) What helps organizations reduce their investment in inventory while improving customer satisfaction through product availability? A) Supply chain event management B) Selling chain management C) Collaborative engineering D) Collaborative demand planning
Answer: D
255) Which of the following is not a current CRM trend? A) Partner relationship management B) Supplier relationship management C) Employee relationship management D) Distributor relationship management
256) Which of the following is not a valid way that a CRM system can collect information? A) Accounting system B) Order fulfillment system C) Inventory system D) Customer's personal computer
257) What occurs when a website can know enough about a person's likes and dislikes that it can fashion offers that are more likely to appeal to that person? A) Operational CRM B) Analytical CRM C) Website personalization D) List generators CRM
258) Which of the following is not one of the three phases in the evolution of CRM? A) Reporting B) Analyzing C) Processing D) Predicting
259) What helps an organization identify its customers across applications? A) CRM reporting technologies B) CRM analyzing technologies C) CRM processing technologies D) CRM predicting technologies
260) What is an organization performing when it asks questions such as "Why was customer revenue so high?" A) CRM reporting technologies B) CRM analyzing technologies C) CRM processing technologies D) CRM predicting technologies
261) What is an organization performing when it asks questions such as "Which customers are at risk of leaving?" A) CRM reporting technologies B) CRM analyzing technologies C) CRM processing technologies D) CRM predicting technologies
262) Which question below represents a CRM reporting technology example? A) Why did sales not meet forecasts? B) What customers are at risk of leaving? C) What is the total revenue by customer? D) All of the above
263) Which question below represents a CRM analyzing technology question? A) Why did sales not meet forecasts? B) What customers are at risk of leaving? C) What is the total revenue by customer? D) All of the above
264) Which question below represents a CRM predicting technology question? A) Why did sales not meet forecasts? B) What customers are at risk of leaving? C) What is the total revenue by customer? D) All of the above
265) Which of the following operational CRM technologies does the sales department typically use? A) Campaign management, contact management, opportunity management B) Sales management, contact management, contact center C) Sales management, call scripting, opportunity management D) Sales management, contact management, opportunity management
266) Which of the following operational CRM technologies does the marketing department typically use? A) Contact center, web-based self-service, call scripting B) Contact center, cross-selling and up-selling, web-based self-service C) List generator, opportunity management, cross-selling and up-selling D) List generator, campaign management, cross-selling and up-selling
267) Which of the following operational CRM technologies does the customer service department typically use? A) Contact center, web-based self-service, call scripting B) Sales management, contact management, opportunity management C) List generator, opportunity management, cross-selling and up-selling D) List generator, campaign management, cross-selling and up-selling
268) What compiles customer information from a variety of sources and segments the information for different marketing campaigns? A) Campaign management system B) Cross-selling C) Up-selling D) List generator
269) What guides users through marketing campaigns performing such tasks as campaign definition, planning, scheduling, segmentation, and success analysis? A) Campaign management system B) Cross-selling C) Up-selling D) List generator
270) What is McDonald's performing when it asks its customers if they would like to super-size their meals? A) Campaign management B) Cross-selling C) Up-selling D) Down-selling
271) Which of the following represents sales force automation? A) Helping an organization identify its customers across applications B) Selling additional products or services to a customer C) A system that automatically tracks all of the steps in the sales process D) Selling larger products or services to a customer
272) What automates each phase of the sales process, helping individual sales representatives coordinate and organize all of their accounts? A) Sales management CRM systems B) Contact management CRM systems C) Opportunity management CRM systems D) All of the above
273) What maintains customer contact information and identifies prospective customers for future sales? A) Sales management CRM system B) Contact management CRM system C) Opportunity management CRM system D) Sales force automation CRM system
274) What targets sales opportunities by finding new customers or companies for future sales? A) Sales management system B) Contact management system C) Opportunity management system D) Sales force automation system
275) Which of the following was one of the first CRM components built to address the issues that sales representatives were struggling with the overwhelming amount of customer account information they were required to maintain and track? A) Sales management system B) Contact management system C) Opportunity management system D) Sales force automation system
276) What is the primary difference between contact management and opportunity management? A) Contact management deals with new customers, opportunity management deals with existing customers B) Contact management deals with existing customers, opportunity management deals with existing customers C) Contact management deals with new customers, opportunity management deals with new customers D) Contact management deals with existing customers, opportunity management deals with new customers
277) Which of the following is where customer service representatives answer customer inquiries and respond to problems through a number of different customer touchpoints? A) Contact center B) Web-based self-service C) Call scripting D) Website personalization
278) What allows customers to use the web to find answers to their questions or solutions to their problems? A) Contact center B) Web-based self-service C) Call scripting D) Website personalization
279) What accesses organizational databases that track similar issues or questions and automatically generate the details to the CSR who can then relay them to the customer? A) Contact center B) Web-based self-service C) Call scripting D) Website personalization
280) What is automatic call distribution? A) Automatically dials outbound calls and when someone answers, the call is forwarded to an available agent B) Directs customers to use touch-tone phones or keywords to navigate or provide information C) A phone switch routes inbound calls to available agents D) All of the above
281) What is interactive voice response (IVR)? A) Automatically dials outbound calls and when someone answers, the call is forwarded to an available agent B) Directs customers to use touch-tone phones or keywords to navigate or provide information C) A phone switch routes inbound calls to available agents D) All of the above
What should I do with a bunch of 16-17 year olds to get them interested in computer science?
I'm going to be involved with a sort of 'open day' at my university in a few weeks. As part of this time, I (along with a coworker) am being given a whole bunch of high-school level students for two hours, as well as a computer lab big enough to contain them all, and I have to do some kind of activity or set of activities with them to encourage them to do computer science (at my university, ideally, but in general also). I am at an absolute loss as to what to do here, and welcome any and all suggestions.
Koz RossKoz Ross
$\begingroup$ I'm not a teacher nor an expert, however I suggest you teach them how to program a small puzzle-game (pick one whose generalization is NP-complete) covering aspects like: level generation, solution checking, automatic solution finding :-) $\endgroup$ – Vor Jun 24 '13 at 9:03
$\begingroup$ I liked the 2008 Royal Institution Christmas Lectures. You might want to try similar activities/demos. $\endgroup$ – melhosseiny Jun 25 '13 at 19:01
You can have them draw pictures using a context-free grammar (see Context Free Art). This also works for people who have never programmed before and scales to experienced programmers. The basic language is easy enough to explain in maybe half an hour.
Learning something about geometry using turtle graphics should be nice too. Logo was designed for children, so high-school students should have no problem. There are nice videos of children using Logo on YouTube.
If you can get your hands on some MindStorms robots, programming them is lots of fun.
There are a variety of programming games in which you program robots to fight each other, or assembly programs that try to overwrite each other in a virtual machine. Wikipedia on the topic, related stackoverflow question
You can also think about some kind of hardware project: making a microcontroller blink an LED depending on the number of unread e-mails in your inbox, for example.
Have them implement different maze generation algorithms, try to come up with criteria that make mazes "difficult for humans". If time permits extend the algorithms to include not only corridors but also rooms.
Buy a couple of Arduinos and LEDs. Let them program the blinkenlights.
adrianNadrianN
$\begingroup$ You may want to add the links to Khan Academy's computer programming tutorials to the list: It's a really cool/interactive addition to above. For example: khanacademy.org/cs/intro-to-animation/830742281 $\endgroup$ – PhD Jun 26 '13 at 5:05
$\begingroup$ +1 for the CFGs too - another version of the same notion that would be a good one to try and apply would be to try and do bush-drawing with an Iterated Function System; have them start with a rectangle, set up several more rectangles, and then repeat the 'contents' of the first rectangle (including all the subrectangles) into each of the subrectangles. You could have a digital version set up for comparison purposes. $\endgroup$ – Steven Stadnicki Sep 5 '13 at 21:25
Check out Computer Science Unplugged. From their site:
CS Unplugged is a collection of free learning activities that teach Computer Science through engaging games and puzzles that use cards, string, crayons and lots of running around.
The activities introduce students to underlying concepts such as binary numbers, algorithms and data compression, separated from the distractions and technical details we usually see with computers.
CS Unplugged is suitable for people of all ages, from elementary school to seniors, and from many countries and backgrounds. Unplugged has been used around the world for over fifteen years, in classrooms, science centers, homes, and even for holiday events in a park!
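If you do want to carry one of the Unplugged activities back into the computer lab afterwards, the binary-cards exercise translates almost line-for-line into Python. A sketch (the card values come from the activity; the function name is my own):

```python
# The classic Unplugged binary-cards activity: five cards showing
# 16, 8, 4, 2, and 1 dots; a face-up card counts toward the total.
CARDS = [16, 8, 4, 2, 1]

def face_up_cards(n):
    """Which cards must be face-up to show the number n (0..31)?"""
    up = []
    for card in CARDS:
        if n >= card:
            up.append(card)   # this card is face-up: a 1 bit
            n -= card
        # otherwise the card stays face-down: a 0 bit
    return up

print(face_up_cards(13))   # [8, 4, 1], i.e. 13 is 01101 in binary
```

Students who have just done the card version tend to recognize every line, which makes for a gentle first contact with real code.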
Pål GDPål GD
$\begingroup$ This is a nice suggestion, but I suspect the OP wants something that actually uses the computer lab that has been set aside. $\endgroup$ – András Salamon Jun 24 '13 at 9:06
$\begingroup$ Valid point. There should be a robocode project for people who don't know programming. $\endgroup$ – Pål GD Jun 24 '13 at 11:15
Most Computer Science undergraduates that I know consider learning to program to be the most painful and demoralizing part of their education. I would therefore stay away from anything that has to do with programming itself. As scphantm pointed out already, you probably also won't have time for this.
What you're looking for is a two-hour exercise that satisfies two goals:
It's exciting enough to keep high-school graduates interested for two hours,
It will give them a glimpse of what Computer Science is, and hopefully get them interested in it.
The first goal is fairly independent of what you're actually going to show and has a lot more to do with being a good teacher/presenter. Good didactic practice, i.e. keeping your audience on their toes, letting them try small things in groups, giving them a breather every 15 minutes, and so on.
The second goal is a bit trickier, and what I think works best here is to take a problem that can be explained with their current knowledge, show how you can describe the solution algorithmically, and then show how that solution can be analysed and improved.
A good example is the shortest path problem in graphs, otherwise known as a GPS navigation system. No explanation needed. You can give them a small map with edge weights/length drawn in and a bunch of crayons to actually execute the algorithm as you describe it.
You can then start a discussion on how you would find a shortest path, and so on, and let them try to formulate it as an algorithm. Then you describe Dijkstra's algorithm, letting them colour the nodes as visited, tentative, and unvisited sets. Bam. You've got an algorithm!
If you still have time, you can go on to explain some details, i.e. stuff that we take for granted like finding the minimum in the set of tentative nodes. If you get this far, you can show the difference between linear search and a heap, and as a bonus you get to introduce $\mathcal O$-notation.
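If the lab machines are free, that last step also demos nicely in code. A minimal Python sketch of Dijkstra's algorithm with a heap-ordered tentative set (the road map and all names here are made up for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from `start`; `graph` maps node -> [(neighbor, weight)]."""
    dist = {start: 0}
    finished = set()                 # the "visited" set, coloured in for good
    tentative = [(0, start)]         # the heap-ordered "tentative" set
    while tentative:
        d, node = heapq.heappop(tentative)
        if node in finished:
            continue                 # stale heap entry; node already finalized
        finished.add(node)
        for neighbor, w in graph.get(node, []):
            if d + w < dist.get(neighbor, float("inf")):
                dist[neighbor] = d + w
                heapq.heappush(tentative, (d + w, neighbor))
    return dist

# A toy "road map" with edge lengths, like the hand-out described above:
roads = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
}
print(dijkstra(roads, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Swapping the heap for a linear scan over the tentative set is a small change, which makes the linear-search-versus-heap comparison, and the $\mathcal O$-notation that falls out of it, very tangible.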
Having said all that, this is about as far as I would go. Don't touch the whole $P$ vs. $NP$ discussion with a ten-foot pole. Although most computer scientists find this fascinating, most high-school students won't. I know this from experience. The key, in my opinion, is to start with a problem that they can understand, or relate to, and take it from there without the need for much introduction.
PedroPedro
If you only have 2 hours, you aren't going to get much coding done. Just learning syntax will be hard in that time, but there are plenty of things that can be done instead.
As a suggestion, try teaching them control flow and the importance of being specific:
Divide the class into 2, "robots" and the others "programmers".
Come up with a suitable challenge that requires some simple logic, looping etc. - there is an example below.
Have the "programmers" write out a list of instructions that are given to the "robots"
Have the "robots" perform the instructions, but let the "robots" know that if the instructions are confusing they are allowed to stop, error or otherwise act up until the "programmer" stops and debugs them. Guaranteed, if given a chance to play up, a high schooler will.
As an example task, set up some tubs of different coloured balls, with corresponding coloured strips of paper elsewhere and enough small buckets for each robot/programmer pair. The task is to make the robot fill the bucket with balls, however to do so they can only take balls that match a specific strip of paper. If there are no more balls of that colour in a tub, then the robot must return their strip of paper and collect a new one.
This task requires conditional branching, looping, error handling, and thinking procedurally. All things that a programmer needs to be good at, regardless of the language or activity.
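For the counselors' benefit, the control flow the students' instructions must encode can be sketched in a few lines of Python. Everything here (function and variable names, the data layout) is illustrative, not part of the exercise itself:

```python
def fill_bucket(tubs, strip_color, bucket_size):
    """Fill a bucket with balls matching the current strip of paper.
    `tubs` maps a colour to the number of balls left in that tub."""
    bucket = []
    while len(bucket) < bucket_size:               # looping
        if tubs.get(strip_color, 0) > 0:           # conditional branching
            tubs[strip_color] -= 1
            bucket.append(strip_color)
        else:                                      # error handling: tub is empty
            remaining = [c for c, n in tubs.items() if n > 0]
            if not remaining:
                break                              # no balls left anywhere: stop
            strip_color = remaining[0]             # return strip, collect a new one
    return bucket

print(fill_bucket({"red": 2, "blue": 3}, "red", 4))   # ['red', 'red', 'blue', 'blue']
```

Showing this afterwards lets the students see that the instruction lists they wrote by hand were, in effect, already programs.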
Run something like this twice so the "robots" and "programmers" can swap. In between, give a small lesson on the above patterns of thinking, and they will perform much better in the second, close out with a small talk on the big events in programming - defeating the Nazis, going to the moon, the internet, and you'll have a room of potential and engaged programmers!
user8632user8632
$\begingroup$ Why can't I give this a +10? This is a great idea. $\endgroup$ – Xynariz Sep 5 '13 at 23:05
$\begingroup$ @Xynariz thanks! I've done it a few times with really small groups, and it generally ends up being equal parts fun and frustrating - however the frustration is at the misbehaving "robots" and not computers the kids don't yet understand. $\endgroup$ – user8632 Sep 5 '13 at 23:11
$\begingroup$ Sometimes people don't seem to understand that computers are very, very good at doing exactly what you tell them.. nothing more, nothing less. There's even a CyberChase episode about this ... (hides) $\endgroup$ – Xynariz Sep 5 '13 at 23:15
I've trained many programmers. If all you have is 2 hours, don't bother with teaching them how to code. A computer lab is unnecessary too. To go from zero to hello world, you will lose half the class, spend an hour and forty-five minutes of your 2 hours dealing with glitches, and get nothing done.
You may have more luck showing them what it's like to think like a programmer. Give each of them a pad of paper and a pen and tell them to write a program, in their own language, on how to pick up their cell phone off the desk and make a phone call. Walk through their answers. If you're a code jockey of any salt, you can step through their programs and tell them how to make them better and how to accommodate the detail you need to have. Then ask them to write a program in their own words to do something else mundane: put your pants on, brush your teeth, open a door, whatever. Do the same with that program.
Give them a taste of what its like to THINK like a programmer. They will certainly get more out of that than you trying to teach them Python in 2 hours.
scphantmscphantm
You might try Alice. It's an IDE and API for 3D animation. It has all sorts of built-in objects (rabbits, aliens, trees, buildings, ...) that you can place in an initial scene, with very high-level methods: like walk(north) (which animates the arms and legs while the character moves) and say("my name is Winky") (which might cause a cartoon bubble to come out of the character's mouth).
It allows you to hook keyboard and mouse events so you can do things that are interactive.
The underlying programming language is Java, but the IDE gives you a graphical variant where you drag and drop parts of expressions into an editor window. (It won't let you create a syntax error.)
I think you could get it all preset up with a scene so that someone with no programming experience could do something interesting in just a couple of hours.
Wandering LogicWandering Logic
$\begingroup$ I'd be hesitant to use this approach; it might work for younger students, but highschool students are more likely to see it as childish. That said, this is a fast way to complete something in the time limit of 2 hours... $\endgroup$ – Izkata Jun 24 '13 at 18:06
Coding, even in a toy or graphical language, seems far-fetched in the course of an hour. Hell, I'm not sure I could pick up Alice again and do anything worthwhile in 2 hours. Maybe a weekend, but not 2 hours.
I'd suggest boiling CS down to the bare essentials: problem solving and analysis. Break the group into teams. Take 10 minutes to describe a few high-level computational problems. These should be easy problems that can be easily explained to people with little mathematical or CS background. Examples include:
Sorting lists
Finding minimal spanning trees
Computing (approximate) roots of integers
Take another 10 minutes for further discussion, and to explain the task. Each group is assigned one problem, for which they are to brainstorm solutions. The team will have a half hour to collaboratively figure out a solution or solution(s) to their assigned problem. Then, take an hour to go over the solutions in the entire group, and let the kids figure out whether they work or not, whether there's a faster/better way to solve the problem, etc.
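To give a feel for the level involved, here is one plausible student-style solution to the approximate-roots problem: a bisection sketch in Python. It is one of many valid approaches, not "the" expected answer:

```python
def approx_sqrt(n, tolerance=1e-6):
    """Approximate the square root of n by repeatedly halving an interval."""
    lo, hi = 0.0, max(1.0, float(n))   # the root must lie somewhere in [lo, hi]
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if mid * mid < n:
            lo = mid                   # root is in the upper half
        else:
            hi = mid                   # root is in the lower half
    return (lo + hi) / 2

print(round(approx_sqrt(2), 4))   # 1.4142
```

A group that invents something like this on paper has, without knowing the name, rediscovered binary search, which makes a nice reveal during the discussion hour.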
If the kids don't land on a correct/optimal solution, that's OK. Don't just give the answers away, though - this is absolutely critical. The reason why kids don't do STEM anymore is because educators give kids the impression that everything's already figured out. It will take very mature counselors to allow the kids to try to solve these problems, and to succeed or fail, on their own. Getting the right answer's not the point. The point is giving the kids interesting problems and showing the kids what computer science is about: solving problems, and evaluating solutions for correctness and efficiency. Letting the kids come up with their own answers will give them a sense of ownership and help them feel engaged.
Of course, if the kids ask whether they got a correct/good/the best known answer, tell them the truth. But don't just give away the answers, unless they come up organically as a result of discussing the students' solutions. To summarize:
Give kids easy to understand but rich problems to explore.
Let the kids come up with their own solutions, providing only enough help to make sure the kids understand the problem at hand.
Discuss correctness/efficiency in a group setting, giving the groups a chance to explain their solutions. As counselors, you're free to take the discussion of correctness/efficiency as far as you think it can profitably go.
Under no circumstances should you present your own, or any well-known, solutions to the problem, unless they are basically identical to those provided by students. Do not make it seem like CS is a field where people have already figured out all the answers.
If possible, leave the kids feeling like they've learned something, but still have questions: did they find the best answers? Are there other questions they can solve in a similar way? You might even give them some undecidable problem in an easy-to-digest format so they have something to work on afterwards.
Patrick87
$\begingroup$ You might even consider pitting teams against each other in a friendly competition. Give pairs of teams the same problem and see who comes up with the better solution. $\endgroup$ – Patrick87 Jun 24 '13 at 22:48
$\begingroup$ Competitions of that sort strongly discourage shy people and only bolster the egos of those who already know the stuff. This only strengthens the image of CS as being "only for the nerds". $\endgroup$ – adrianN Jun 25 '13 at 8:35
I'm 17 now and I started programming around the time I turned 16. I'm going to tell my story and then make some suggestions. My interest in programming started when I was watching a computer tech guy I had called in mess around with my registry and command prompt (even though he wanted $500 to fix my BSODs and I didn't pay; I fixed them on my own). So I googled "command prompt language" and found out that there was something called "source code" that allowed you to program. At the time I had no idea what C++ was; I don't think I'd even heard of it. So I went on cpp.com (very bad tutorials, you will learn bad and outdated practices) and started learning the basics. My mind went crazy, and I learned that the virus I was infected with, the one causing my problems, was actually written in C++, which interested me further. I later started reading and learning assembly and other languages. I started out wanting to learn about malware and graphics programming, and I did.
This may sound bad, but a lot of people my age are actually interested in the destructive side of programming. The first question I get from my friends when I tell them I'm pretty good with C++ is "Can you make viruses, change grades or hack games?" I'm not quite at that level (I've been studying DLL injection recently, though, so I'm getting there). Perhaps you could come up with something along the lines of malware that isn't dangerous or illegal but is still interesting (maybe get the login information of a student from the school server). You could talk to them about how viruses and malicious software work, too.
Develop a small game along the lines of Pokémon and describe to them how games and game engines work. A lot of people would probably be surprised to know that in many 2D games like this the character is not actually moving: the background moves, and the character just plays an animation. Talk about random numbers, etc. Come up with some 3D demonstrations too.
Try to stay away from explaining what the code does; tell them what the program itself does without talking about the code too much. In my experience that's an easy way to lose people's attention, especially if they don't understand the basics of the language. In fact, I would try not to put all the source code out there, because it can be fairly discouraging for someone to look at 500 lines of code and not understand any of it. Also, if someone you're demonstrating to is like me, they'll probably ask a chain of questions, because they have a curious mind. For example: you're talking about random numbers, they ask what random numbers are for and where they come from, then you have to explain electronic noise and how it's random, and then you'll probably find yourself in a situation where you're just like "I don't know". Questioning from teenagers can be pretty recursive...
Lego Mindstorms is a great idea. If you don't want to take the long route and use a major language, it comes with a block-style programming language that you can use. I figured out that language in about 30-40 minutes; everything lines up when you think about it.
You could quickly develop an app and show it off, and talk to them about the money that can come from app development.
moonbeamer2234
Generating fractals. They have a strong link to deep mathematics and computer graphics, and they're naturally suited for parallelism. Fractals illustrate complexity and emergent behavior, especially when you zoom to arbitrary scales, and have strong tie-ins to science and natural phenomena. It is not hard to write some parallel fractal code that runs on multiple machines. One experiment is to have each machine display the random lines that it processed (e.g. "slave" machines that process lines from a queue) and then have a central machine display the combined results.
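Before parallelizing, it helps to have a tiny serial renderer in hand. This sketch (function name, grid size, and plane bounds are my own choices) draws the Mandelbrot set as ASCII art; handing different rows to different machines is then the natural parallel split.

```python
def mandelbrot_rows(width=60, height=24, max_iter=30):
    """Render the Mandelbrot set over [-2, 1] x [-1.2i, 1.2i] as ASCII rows."""
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map pixel (i, j) to a point c in the complex plane.
            c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            # Points that never escape (n hit max_iter) are drawn as '#'.
            row.append("#" if n == max_iter else " ")
        rows.append("".join(row))
    return rows

print("\n".join(mandelbrot_rows()))
```

Each row is independent of the others, so a queue of row indices plus a pool of workers is all the parallel version needs.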
Lego robotics (or other robotics kits, e.g. Stamp). Mindstorms is a toy, but it can be a very advanced one, serving as a tangible demonstration of abstract concepts. The software that can be run on them can be very complex, and they can have complicated sense-think-act loops/algorithms. There are many good books of constructions. Also impressive are the Rubik's cube solvers, which recently broke the world record.
Raspberry Pi is a new, inexpensive platform that is seeing a lot of interest and use. It can be used to demonstrate Linux programming, robotics, etc., and has HD output. See, e.g., the Southampton Raspberry Pi supercomputer with its Lego rack.
Logo, as mentioned in the other answer, is an old classic. Another, newer angle is game programming, e.g. with an emerging popular language called Scratch (invented at MIT). It can teach many natural/advanced CS topics.
Here's another angle. There are so many interesting open problems and emerging technologies in computer science, at the frontiers of scientific understanding, that can spark curiosity and wonder, i.e. the exploration of nearby terra incognita. If you raise the problems and then have the class participate in a discussion about the ramifications of solutions, that can spark significant interest and inspiration. [Since you mention the availability of the computer lab, it would also be possible to creatively come up with some hands-on computer exercises related to these areas.]
This can take on a sci-fi feeling, but CS, like no other field, turns what was once sci-fi into reality in a short amount of time. These topics can also be controversial and timely, connecting with today's headlines, and students can begin to grasp how ubiquitous CS is in our world and society, and how significant it is when broadly interpreted. Here are a few big ones:
The DNA-to-protein folding problem. Is there an algorithm to calculate it accurately?
Artificial intelligence in general. Is it possible? Are there ethics involved?
Robotics has various key emerging areas, e.g. autonomous cars/driving, which is on the near-term horizon. How will this affect society? The video of the DARPA contest from not too long ago is impressive. Kurzweil's writing has a lot of material to get into. Drones are a complicated topic, rarely openly discussed, that will be increasingly used domestically. The Mars rovers are extraordinary technology, and there are amazing stories behind them, such as how the systems had to be debugged remotely (interplanetarily!) when they failed.
IT-based surveillance systems to detect crime/terrorism have been heavily in the news lately.
The P vs. NP problem. It sounds abstract, but it can be presented in a very tangible way, such as by talking about how video games are NP-complete; the problem can be visualized as the size of circuits required to solve NP-complete problems. Discuss the wild implications of P=NP, and how cryptography and secure transactions depend on the P$\neq$NP assumption. Oh, and don't forget to mention the $1M prize hanging over it, which is close to what is called, in [theory] teaching, "motivation"!
The Higgs boson could not have been discovered, and the supercollider cannot function at all, without large CS-based systems for analyzing the "big data".
Moore's law. How far will it continue? How much has it already affected society/humanity?
Quantum computers. Are they possible? Will they be faster? Will they be low-cost or always unwieldy? D-Wave is a colorful case study; there is a great SciAm article by Aaronson, etc.
The Google PageRank algorithm is one of the multibillion-dollar wonders of modern computer science. Will it be extended? How does spam filtering work? The company seems to be moving toward analyzing images, etc.
Algorithmic/high-frequency trading now moves massive amounts of trading volume/value. Is it good or bad? Is it increasing or decreasing? Will it be regulated in the future? What kind of computational arms race is involved?
Supercomputers are massive, solve amazing problems, and are getting bigger. Are there limits? What do they compute, and what will they compute in the future? Somewhat related: Big Data and data mining.
Social networking sites have had huge implications in less than a decade of growth. They were involved in fueling popular uprisings, e.g. the Arab Spring and Occupy Wall Street. What is their future?
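Returning to the P vs. NP item: Subset Sum makes the asymmetry tangible in a lab session. Verifying a proposed answer is fast, while the obvious search tries all $2^n$ subsets. A small illustrative sketch (the example numbers are made up):

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Search all subsets (exponentially many) for one summing to target."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo  # a witness certifying a "yes" answer
    return None

def verify(nums, target, witness):
    """Checking a proposed certificate takes linear time: the easy direction of NP."""
    remaining = list(nums)
    for x in witness:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(witness) == target
```

Doubling the input list roughly doubles the search time, while `verify` stays cheap; students can time this themselves and see the exponential wall appear.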
I have a proposition that
focuses on computer science (not programming or auxiliary),
starts with a premise most kids know and
has actually been tried and works.
We have been holding small workshops with high school students about Minesweeper. The workshop would roughly go like this:
Let's play the game a bit (most know it).
What have we just done? What is the problem we try to solve? Can we formulate general rules?
This will usually take a while. Kids are not used to formulating problems in terms of input and output, let alone stating general rules for solving them. Those who have programmed before will appreciate the effort; referencing "spaghetti code" can help. Nevertheless, the rules will be simple most of the time, considering only one cell at a time.
Exhibit problems with the rules.
At this point, you want to introduce a Minesweeper simulator. The one by Bayer, Snyder and Choueiry is not perfect, but it allows you to exhibit carefully designed scenarios.
Improve the ruleset to cover more scenarios.
This will typically lead the students to investigate more and more cells together. You can also nudge them towards "solve all" approaches, like expressing the information at hand as a system of linear equations; this comes up naturally if you try to state the available information in mathematical terms. Students already know how to solve such systems!
Note limitations.
First, there are scenarios that have no (deterministic) solution. Furthermore, we can contrast brute-force with our developed strategies. Can we trade-off speed versus power? If the equation-system approach turns up, note that we can only solve this efficiently over the reals, but we need binary answers. It's not too hard to build scenarios which lead to huge runtimes (we used computer algebra to illustrate).
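The equation-system idea can be made concrete in a few lines. The sketch below uses a hypothetical three-cell border scenario (not one of the workshop's actual boards): it brute-forces all 0/1 assignments, keeps the consistent ones, and calls a cell decidable exactly when every consistent assignment agrees on it.

```python
from itertools import product

# A made-up three-cell border scenario: each constraint pairs the set of
# unknown cells a revealed number touches with that number.
constraints = [
    ({0, 1}, 1),     # a revealed "1" adjacent to cells 0 and 1
    ({1, 2}, 1),     # a revealed "1" adjacent to cells 1 and 2
    ({0, 1, 2}, 2),  # a revealed "2" adjacent to all three cells
]
cells = sorted(set().union(*(cs for cs, _ in constraints)))

def consistent(assignment):
    """Does this 0/1 mine labelling satisfy every revealed number?"""
    return all(sum(assignment[c] for c in cs) == k for cs, k in constraints)

solutions = []
for bits in product((0, 1), repeat=len(cells)):
    assignment = dict(zip(cells, bits))
    if consistent(assignment):
        solutions.append(assignment)

# A cell is decidable exactly when every consistent assignment agrees on it.
decided = {c: solutions[0][c] for c in cells
           if len({s[c] for s in solutions}) == 1}
print(decided)  # here cell 1 is safe, cells 0 and 2 are mines
```

Brute force over $2^3$ assignments is instant; over a border of 50 cells it is hopeless, which is exactly the speed-versus-power trade-off, and it shows why solving the system over the reals is not enough when binary answers are required.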
Depending on the group, this approach allows you to cover multiple principles of computer science in a natural way: defining problems, describing general algorithms, and iterative problem-solving, as well as issues of computability and complexity, can all be touched upon.
Feedback from students has been positive overall; they feel engaged and express interest in the concepts. It is important to let them do most of the work, only carefully nudging them in the desired direction by asking pointed questions.
Raphael♦
You have a lot of options, but one thing that always seems exciting is money: present the P≠NP question and the seven Millennium Prize Problems. When I read about it in middle school, I didn't know the notation, but the one thing I understood was: there is a big prize and a question! Another idea is presenting the connection between mathematics and computer science, like solving equations and checking solutions using computers.
Other things I would suggest: present Alan Turing, "the father of computer science", and tell his story. The last things I suggest are zero-knowledge proofs, illustrated with the game "Where's Waldo?" (playing without cheating), cryptography, and cyber attacks.
Fayez Abdlrazaq Deab
Do anything with Facebook; they love it. Maybe this is too difficult for beginners, but you could let them draw connection graphs that show how their profiles are connected to each other. I would recommend JavaScript as the programming language.
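Whatever language does the drawing, the connection graph itself is just an adjacency mapping. A minimal sketch of that data layer (profile names invented for illustration; in class this would come from the students' own profiles, with a JavaScript renderer on top):

```python
# Made-up friendship data standing in for the students' profiles.
friends = {
    "ana": {"ben", "cora"},
    "ben": {"ana", "cora", "dan"},
    "cora": {"ana", "ben"},
    "dan": {"ben"},
}

def edges(adj):
    """Undirected edge list for drawing the connection graph."""
    return sorted({tuple(sorted((u, v))) for u, vs in adj.items() for v in vs})

def mutual(adj, u, v):
    """Profiles that u and v are both connected to."""
    return adj[u] & adj[v]
```

Once the edge list exists, questions like "who are our mutual friends?" or "how many hops apart are two profiles?" become small, concrete programming exercises.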
January 2021, 14(1): 321-330. doi: 10.3934/dcdss.2020326
Global existence and uniqueness for a volume-surface reaction-nonlinear-diffusion system
Karoline Disser
FB Mathematik, TU Darmstadt, Schlossgartenstr. 7, 64293 Darmstadt, Germany
Dedicated to Alexander Mielke on the occasion of his 60th birthday
Received April 2019 Revised November 2019 Published April 2020
We prove a global existence, uniqueness and regularity result for a two-species reaction-diffusion volume-surface system that includes nonlinear bulk diffusion and nonlinear (weak) cross diffusion on the active surface. A key feature is a proof of upper $ L^{\infty} $-bounds that exploits the entropic gradient structure of the system.
Keywords: Volume-surface system, reaction-diffusion system, $ L^{\infty} $-estimates, entropy method, nonlinear diffusion.
Mathematics Subject Classification: Primary: 35K61, 35K57; Secondary: 35B45, 35A01.
Citation: Karoline Disser. Global existence and uniqueness for a volume-surface reaction-nonlinear-diffusion system. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 321-330. doi: 10.3934/dcdss.2020326
D. Bothe, On the multi-physics of mass-transfer across fluid interfaces, arXiv: 1501.05610.
D. Bothe, M. Köhne, S. Maier and J. Saal, Global strong solutions for a class of heterogeneous catalysis models, J. Math. Anal. Appl., 445 (2017), 677-709. doi: 10.1016/j.jmaa.2016.08.016.
H. Brézis, Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert, North-Holland Publishing Co., Amsterdam, 1973.
K. Disser, Well-posedness for coupled bulk-interface diffusion with mixed boundary conditions, Analysis, 35 (2015), 309-317. doi: 10.1515/anly-2014-1308.
K. Disser, Global existence, uniqueness and stability for nonlinear dissipative bulk-interface interaction systems, arXiv: 1703.07616, J. Differential Equations, accepted for publication (2020).
K. Disser, M. Meyries and J. Rehberg, A unified framework for parabolic equations with mixed boundary conditions and diffusion on interfaces, J. Math. Anal. Appl., 430 (2015), 1102-1123. doi: 10.1016/j.jmaa.2015.05.041.
K. Fellner, E. Latos and B. Q. Tang, Well-posedness and exponential equilibration of a volume-surface reaction-diffusion system with nonlinear boundary coupling, Ann. Inst. H. Poincaré Anal. Non Linéaire, 35 (2018), 643-673. doi: 10.1016/j.anihpc.2017.07.002.
J. R. Fernández, P. Kalita, S. Migórski, M. C. Muñiz and C. Nuñéz, Existence and uniqueness results for a kinetic model in bulk-surface surfactant dynamics, SIAM J. Math. Anal., 48 (2016), 3065-3089. doi: 10.1137/15M1012785.
J. Fischer, Weak-strong uniqueness of solutions to entropy-dissipating reaction-diffusion equations, Nonlinear Anal., 159 (2017), 181-207. doi: 10.1016/j.na.2017.03.001.
A. Glitzky, An electronic model for solar cells including active interfaces and energy resolved defect densities, SIAM J. Math. Anal., 44 (2012), 3874-3900. doi: 10.1137/110858847.
A. Glitzky and A. Mielke, A gradient structure for systems coupling reaction-diffusion effects in bulk and interfaces, Z. Angew. Math. Phys., 64 (2013), 29-52. doi: 10.1007/s00033-012-0207-y.
A. Jüngel, The boundedness-by-entropy method for cross-diffusion systems, Nonlinearity, 28 (2015), 1963-2001. doi: 10.1088/0951-7715/28/6/1963.
F. Keil, Complexities in modeling of heterogeneous catalytic reactions, Comput. Math. Appl., 65 (2013), 1674-1697. doi: 10.1016/j.camwa.2012.11.023.
S. Kjelstrup and D. Bedeaux, Non-equilibrium Thermodynamics of Heterogeneous Systems, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2008. doi: 10.1142/9789812779144.
A. Mielke, Thermomechanical modeling of energy-reaction-diffusion systems, including bulk-interface interactions, Discrete Contin. Dyn. Syst. Ser. S, 6 (2013), 479-499. doi: 10.3934/dcdss.2013.6.479.
M. Pierre, Global existence in reaction-diffusion systems with control of mass: A survey, Milan J. Math., 78 (2010), 417-455. doi: 10.1007/s00032-010-0133-4.
Basic Quantum Mechanics Concepts with Continuous Spectra
The following are a couple of excerpts from the first chapter of Sakurai and Napolitano, Modern Quantum Mechanics, 2nd edition:
Prior to these formulas, the text discusses the fundamental mathematics of quantum mechanics with finite dimensional state spaces, in particular spin $\frac{1}{2}$ systems. The left hand sides of the formulas above are associated with cases involving finite dimensional (or at least countable) state spaces and the right hand sides are corresponding equations for continuous spectra.
I understand everything on the left hand side. In particular:
$|\alpha\rangle$ is a vector in a separable Hilbert space.
$\langle \cdot | \cdot \rangle$ is the Hilbert space inner product
$\langle \alpha | \beta \rangle \in \mathbb{C}$.
But I'm confused by the formulas on the right hand side. What are the types of objects involved? For example, should one still think of $\langle \cdot | \cdot \rangle$ as a Hilbert space inner product yielding a complex number? If so, how can one interpret the right hand side of (1.6.2a) without some high intensity hand waving? The $\delta(\xi'-\xi'')$ expression suggests one should think of $\langle \xi'|\xi''\rangle$ as some type of linear operator, not an ordinary complex number.
I am also tempted to make the integrals on the right hand side disappear by thinking of something (maybe $|\xi'\rangle$?) as an integral operator as studied in functional analysis. Composing or applying integral operators may then yield integral expressions, but the linear operator perspective would be more fundamental and enlightening.
Any help from domain experts would be greatly appreciated.
Will Nelson
$\begingroup$ I recommendation working through the appropriate sections of Chapter 1 of Shankar, if you can get your hands on it. For me it was a huge help transitioning from knowing how do Dirac notation to actually understanding what I was doing. $\endgroup$ – David H Jun 13 '14 at 3:51
$\begingroup$ You can make rigorous sense of $d\xi \left|\xi \right\rangle \left\langle \xi \right|$ as a projection valued measure (en.wikipedia.org/wiki/Projection-valued_measure); in particular, $\int_a^b d\xi \left|\xi \right\rangle \left\langle \xi \right|$ is the orthogonal projection onto the subspace of all states for which your observable takes values in the interval $[a,b]$. $\endgroup$ – Branimir Ćaćić Jun 13 '14 at 5:44
$\begingroup$ As for $\left\langle \xi \right|$, the way to interpret this is as a continuous functional on some dense subspace $\mathcal{D}$ of your Hilbert space $H$, endowed with a finer topology—for further details of this, read up on rigged Hilbert spaces en.wikipedia.org/wiki/Rigged_hilbert_space. $\endgroup$ – Branimir Ćaćić Jun 13 '14 at 5:45
Sometimes an example is most helpful. I'll use the conventions of Mathematics because I'm locked in on those. The Fourier transform involves the eigenfunctions $e_{\lambda}(t)=\frac{1}{\sqrt{2\pi}}e^{i\lambda t}$ of $\frac{1}{i}\frac{d}{dt}$. $$ x^{\wedge}(\lambda)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}x(t)e^{-i\lambda t}\,dt = ``(x,e_{\lambda})". $$ And the inverse Fourier transform of the above gives back the original function, and may be written in a form analogous to finite-dimensional eigenfunction expansions: $$ x(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}x^{\wedge}(\lambda)e^{i\lambda t}\,d\lambda = \int_{-\infty}^{\infty} (x,e_{\lambda})e_{\lambda}\,d\lambda. $$ Technically, the $e_{\lambda}$ are not eigenfunctions because they're not in $L^{2}(\mathbb{R})$, but they do satisfy $\frac{1}{i}\frac{d}{dt}e_{\lambda}=\lambda e_{\lambda}$ as functions. The expansion on the right looks like a natural generalization of a finite orthonormal expansion $\sum_{n=1}^{N}(x,e_{n})e_{n}$, and that's the beauty of this notation. For the operators of Quantum Mechanics, it is always true that wave packets (i.e., integral "sums" of the $|\lambda\rangle$ with respect to $\lambda$ over intervals of $\lambda$) are in $L^{2}(\mathbb{R})$. For example, $$ \int_{a}^{b}e_{\lambda}\,d\lambda = \frac{e^{itb}-e^{ita}}{it\sqrt{2\pi}} $$ is a function in $L^{2}(\mathbb{R})$, and the following suggestive formula holds for $-\infty < a < b < \infty$: $$ \frac{1}{i}\frac{d}{dt}\int_{a}^{b}e_{\lambda}\,d\lambda = \int_{a}^{b}\lambda e_{\lambda}\,d\lambda\approx \frac{a+b}{2}\int_{a}^{b}e_{\lambda}\,d\lambda. $$ If $b-a\approx 0$, then $\int_{a}^{b}e_{\lambda}\,d\lambda$ is an "approximate" eigenvector of $\frac{1}{i}\frac{d}{dt}$ with eigenvalue $(a+b)/2$. The integral is definitely in $L^{2}(\mathbb{R})$, but the precision of it being an eigenvector with a definite eigenvalue is lost, though it is well-approximated over any small $\lambda$ interval $[a,b]$.
There is an orthogonality: $$ \int_{a}^{b}c(\lambda)e_{\lambda}\,d\lambda \perp \int_{a'}^{b'}d(\lambda)e_{\lambda}\,d\lambda \mbox{ whenever }[a,b]\cap[a',b']\mbox{ has $0$ length.} $$ More generally, one has the suggestive integral inner-product formulae, $$ \left(\int_{a}^{b}c(\lambda)e_{\lambda}\,d\lambda, \int_{a'}^{b'}d(\lambda)e_{\lambda}\,d\lambda\right) = \int_{[a,b]\cap[a',b']}c(\lambda)\overline{d(\lambda)}\,d\lambda,\\ \left\|\int_{a}^{b}c(\lambda)e_{\lambda}\,d\lambda\right\|^{2} = \int_{a}^{b}|c(\lambda)|^{2}\,d\lambda,\\ x(t) = \int_{-\infty}^{\infty}(x,e_{\lambda})e_{\lambda}(t)\,d\lambda,\\ (x,y) = \int_{-\infty}^{\infty}(x,e_{\lambda})(e_{\lambda},y)\,d\lambda. $$ I leave it to you to write these things in the notation of Physics.
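As an aside (not part of the original answer), these suggestive formulae can be checked numerically. With arbitrary truncation and step-size choices of my own, a midpoint rule on the $t$-integral reproduces $\left\|\int_a^b e_\lambda\,d\lambda\right\|^2 \approx b-a$ and near-orthogonality of wave packets over disjoint $\lambda$-intervals:

```python
import cmath
import math

SQRT2PI = math.sqrt(2 * math.pi)

def packet(a, b, t):
    """The wave packet (integral of e_lambda over [a, b]) evaluated at t."""
    if abs(t) < 1e-9:
        return (b - a) / SQRT2PI  # limiting value as t -> 0
    return (cmath.exp(1j * b * t) - cmath.exp(1j * a * t)) / (1j * t * SQRT2PI)

def inner(a, b, ap, bp, T=400.0, steps=80000):
    """Midpoint rule for the t-integral of packet(a,b,t) * conj(packet(ap,bp,t))."""
    h = 2 * T / steps
    total = 0j
    for k in range(steps):
        t = -T + (k + 0.5) * h
        total += packet(a, b, t) * packet(ap, bp, t).conjugate()
    return total * h
```

The truncation to $[-T, T]$ costs an error of order $1/T$ because the integrand decays like $1/t^2$, so the agreement is approximate but convincing.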
There are definite issues concerning how (if possible) to parameterize $\lambda\rightarrow e_{\lambda}$ in a smooth way, at least when it comes to the general theorem. And there are issues of multiplicity, meaning that it may take several families of eigenfunctions $\lambda\rightarrow e_{\lambda}$, $\lambda\rightarrow e_{\lambda}'$, etc., to get the desired, full representation. This is often ignored because it doesn't pop up in simpler classical problems. But such issues need to be addressed at some point. Also, sometimes you need a mix of integral "sums" and discrete sums. It's possible to have normalization densities that are singular-continuous measures, though this does not occur in basic problems and should be ignored at an elementary level.
I'm a physicist and not a mathematician so I can't answer all your questions, but perhaps I can help a bit.
As far as I know, state vectors $\mid \psi \rangle$ are always elements of a separable Hilbert space, though there can often be additional mathematical structure.
The vectors $\mid \xi \rangle $ are parametrized by a real-valued variable $\xi$. This isn't really all that different from the finite case, where the basis vectors are parametrized by a discrete integer variable. In both cases the inner product between two basis vectors is a complex-valued function of two variables,
$$\langle \xi \mid \xi' \rangle = f(\xi,\xi'),$$
As you noted above, the Dirac delta function is used all over the place in the formal theory of quantum mechanics. It is not really a function, but rather a generalized function, which can be constructed as an equivalence class of sequences of ordinary functions. This is similar to the way in which real numbers can be constructed as equivalence classes of sequences of rational numbers.
Before this theory of generalized functions existed, von Neumann gave quantum mechanics a rigorous foundation based on spectral theory; one of the main goals of that formulation was to avoid the use of the delta function. (The rigged Hilbert space framework, which instead makes the delta-function formalism itself rigorous, came later and is due to Gelfand and his school.) Von Neumann's book on quantum mechanics is probably one of the most mathematically rigorous treatments of the theory that you will find.
One example of an inner product which results in an ordinary complex number is the inner product between the position eigen-states and the momentum eigen-states.
$$ \langle x \mid p \rangle = \frac{1}{\sqrt{2\pi \hbar} } e^{ipx/\hbar} $$
As for interpreting the integrals as integral operators, that should be fine. I do think that getting used to the notation gives greater insight into the theory, though.
Spencer
@FlipPhysics
21 Mar 2022, 00:10 → 25 Mar 2022, 17:05 Europe/Madrid
Salon de actos del IATA
Carrer del Catedràtic Agustín Escardino Benlloch, 7, 46980 Paterna, Valencia
The @FlipPhysics workshop seeks to bring together the community of physicist working in the areas of Nuclear, Particle Physics and its Applications, especially women, and also (under)-graduate, PhD students, and young researchers, who have the opportunity to be introduced to several scientific topics through (mostly) women who have been successful in the field.
Some of the topics that will be covered are:
- Nuclear and Particle Physics, and some of their applications (medical physics, quantum computing)
- Machine Learning applied to Physics
- Dark Matter
- Gravitational waves
- Astroparticle Physics
- Cosmology
And there will be also these activities:
- Special session for undergraduate students
- Virtual tours on experimental facilities
- Sessions on Gender Equality with experts
- Sessions on research plan writing and public speaking
Aashish Rana
Adam Griffiths
Adriana Bariego Quintana
Adrián Cabezas Arjona
Aida Garrido Gomez
Alberto Torralba Torregrosa
Aleesha KT
Alejandra Aguirre-Santaella
Alejandro Alonso
Ali Esquembre
Alicia M Sintes
Alicia Reija
Almudena Arcones
Alonso Císcar Taulet
Amador García Lorenzo
Amor Romero Maestre
Ana Arranz Asensi
Ana Isabel Garrigues Navarro
Ana Isabel Morales Lopez
Ana Quintana Garcia
Andrea Gonzalez-Montoro
Andrea Vioque-Rodríguez
Andres Renteria
Andreu Angles Castillo
Androniki Dimitriou
Ani Aprahamian
Anna Kawecka
Armando Perez
ARNAU BAS I BENEITO
Astrid Hiller Blin
Aurelio Amerio
Avelino Vicente
Barbara Alvarez Gonzalez
Beatrice Giudici
Beatriz Romeo
Belén Gavela
Berta Rubio
Capitolina Díaz
Carla Marin Benito
Carlos Escobar Ibáñez
Carlos Rosa
Carmen Angulo
Carmen Galotto
Carmen Romo Luque
Cesar Domingo-Pardo
Clara Alvarez Luna
Clara Cuesta
Clara Freijo Escudero
Clara Murgui
Claudia Hagedorn
Danish Farooq Meer
David Aguayo
David Francisco Rentería Estrada
David Rodríguez García
Dimitra Tseneklidou
Ebba Ahlgren Cederlöf
Eleftheria Solomonidi
Eleonora Di Valentino
Elisabet Galiana
Elsa Prada
Emanuela Musumeci
Emma Torró Pastor
Eulogio Oset Baguena
eunice asiedu
Farnaz Kazi
Federica Pompa
Finia Jost
Finn Kohl
FIRDOUS HAIDAR
Florencia Castillo
Francesca Calore
Francesco Capozzi
Francisco Torrens
Gabriela Barenboim
Gabriela Moreno
Gabrijela Zaharijas
Gaetana Anamiati
Gerard Navo
Giacomo Landini
Gracia García Arteaga
Guillem Arbona Ferrer
Guillermo Javier Serón Rodrigo
Gustavo Hazel Guerrero Navarro
Hammad Rahseed
Hanaan Shafi
Hareesh Thuruthipilly
Helena Ubach Raya
Hemantika Sengar
Hien Van
Irene Sánchez Carvajal
Irene Torres-Espallardo
Isabel Cordero-Carrión
Isabel Fernández
Ismael Guillén
Ivana Lihtar
Jeevika Senthil Kumar
Joanna Sobczyk
Joaquin López Herraiz
Jorge Terol Calvo
Josep Navarro González
Josipa Diklić
José Manuel Calatayud
Juan Miguel Nieves Pamplona
Judit Pérez-Romero
Judita Mamuzic
Juliana Carrasco
Kabita Kundalia
Kathrin Wimmer
Katyayni Tiwari
Kelsang Dorjee Gurung
Kevin Monsalvez Pozo
Kiriaki Prifti
Kwame APPIAH
Laetitia Canete
Laura Lopez Honorez
Laura Pérez-Molina
Laura Renth
Laura Tolos
Lopamudra Nayak
Lorenzo Varriale
Lotta Jokiniemi
Luana Modafferi
Lucia Caceres
Lucía Castells Tiestos
Luis Carlos Garcia Moreno
Maite Gandia
Maite Mateu-Lucena
Malika Kaushik
Manuella Vincter
Mar Barrantes Cepas
Marc Pérez Safont
Marcos Martinez
Mari-Carmen Banuls
Maria de Lluc Planas Llompart
Maria Jose Gomez Calero
Maria Moreno Llácer
Maria Olalla Olea Romacho
Maria Vittoria Managlia
Mariam Chitishvili
Mariam Tórtola
Mariia Didenko
Marina Tomova
Marta Seror
Martina Delgado-Pinar
María Antonia Lledó
María Benítez Galán
María Luisa Sarsa
Miguel Albaladejo
Miquel Miravet-Tenés
Miriam Rodríguez Sánchez
Miryam Martínez-Vara
Molina Bueno Laura
NARESH KUMAR PATRA
Nataly Díaz Rivera
Nelly Carolina Vega Muñoz
Nerea Encina Baranda
Neus Penalva Martínez
Nicola Farmer
Ninetta Saviano
Nishu Goyal
Norma Selomit Ramírez Uribe
Nuria Fuster
Nuria Rius
Olga Mena Requejo
Omar Medina
Ophir Ruimi
Pablo Galve
Pablo Martínez-Agulló
Pablo Martínez-Miravé
Pablo Muñoz Candela
Pablo Soriano Fajardo
Pas García-Martínez
Pau Hostalet
Paula Bañuls Saiz
Paula Talavera Capilla
Pilar Coloma
Prim Patrawan Pasuwan
Rajat RANA
Raquel Molina Peralta
Rasmi Hajjar
Raul Cantos
Raul Martinez Pavon
Rebeca Beltrán Lloría
Saboura sadat Zamani
SAI KUMAR CHINTHAKAYALA
Salvador Marti Garcia
Salvador Mengual Sendra
Samantha López Pérez
Samuel Santos-Pérez
Sandipan Bhattacherjee
Santiago Gonzalez de la Hoz
Santiago Paz Castro
Sara Arriolabengoa Zazo
Sara Martín Luengo
Sara Porras Bedmar
SATYABRATA MAHAPATRA
Sema Kucuksucu
Silvia Pérez Cámara
Sofia Gil
Soni Devi
Sonja Orrigo
Stefan Sandner
Susana Cabrera Urbán
Tamara Pardo
Tanja Kirchner
Unnati Gupta
Valentina De Romeri
Veronica Sanz
Victoria Sánchez Sebastián
Viktoria Kraxberger
Viviana Gammaldi
Vladimir Pastushenko
Víctor Montesinos Llácer
Yasmin Naghizadeh
Zeynalov Shakir
Óscar Soriano Masiá
[email protected]
Monday, 21 March
Registration Main entrance of IATA
Presentation of the Workshop
Conveners: Adela Valero (UV), María Jesús Añón (CSIC), Prof. Nuria Rius (IFIC(CSIC-UV)), Dr. Raquel Molina Peralta
presentacion.pdf
Nuclear Physics Salón de Actos del IATA
Convener: Dr. Anabel Morales (Chair) (IFIC)
Origin of heavy elements: r-process in neutron star mergers and core-collapse supernovae 35m
Our understanding of the origin of heavy elements by the r-process has made great progress in the last years. In addition to the gravitational wave and kilonova observations for GW170817, there have been major advances in the hydrodynamical simulations of neutron star mergers and core-collapse supernovae, in the microphysics included in those simulations (neutrinos and high density equation of state (EoS)), in galactic chemical evolution models, in observations of old stars in our galaxy and in dwarf galaxies. This talk will report on recent breakthroughs in understanding the extreme environment in which the formation of the heavy elements occurs, as well as open questions regarding the astrophysics and nuclear physics involved. Observations of old stars and meteorites can strongly constrain the astrophysical site of the r-process, once the nuclear physics uncertainties of extreme neutron-rich nuclei are reduced by experiments and by improved theoretical models.
Speaker: Prof. Almudena Arcones (TU Darmstadt)
Arcones_Flip3.pdf
Nuclear spectroscopy for understanding the nuclear forces 35m
Nuclear forces that govern the atomic nuclei are still not fully understood. State-of-the-art nuclear theories deal with the complexity of nuclear systems governed by many degrees of freedom. In order to shed light on these advanced models, nuclear spectroscopy has proven to be of utmost importance for obtaining experimental information on key nuclear observables.
Etymologically, spectroscopy is composed of spectro-, which refers to optical spectra, and -scopy, meaning observation. Nuclear spectroscopy therefore encompasses all types of experiments in which radiation is emitted or absorbed by the nucleus.
This talk will review some of the key experiments in nuclear spectroscopy that have contributed to the development of our understanding of the nuclear forces.
Speaker: Dr. Lucía Cáceres (CEA-GANIL)
Caceres_PlipPhy22.pptx
Social break 20m IFIC Cafeteria
Coffee and pastries
Proton resonances in meson production 35m
The description of the proton properties from its quark and gluon substructure is a topic which is far from being well understood. The strong force binding together the constituents behaves remarkably differently at high and low energies.
The main experimental tool to probe the proton is electron scattering off proton targets. At high energies, the electrons break up the protons, and the underlying physics is well understood in terms of the theory that describes the strong force between quarks and gluons. However, at low energies the connection to the physics of the constituents becomes obscured. In the data spectrum, many resonances appear as interfering and overlapping peaks whose description is highly convoluted. In addition, many of them do not fit the usual quark-antiquark (meson) or three-quark (baryon) frameworks and are thus dubbed exotic resonances.
In this talk, I focus on the theoretical description of the resonant contributions to the proton structure. I also give emphasis to the exotic states, in view of the ongoing and near-future high-luminosity experiments designed for their search and improved understanding.
Speaker: Dr. Astrid Hiller Blin (Eberhard Karls Universtiät Tübingen)
ProtStruc.pdf
MYRRHA, A New Large Research Infrastructure in Belgium for Applications in Nuclear Energy and Nuclear Physics 35m
SCK CEN is at the forefront of Heavy Liquid Metal (HLM) nuclear technology worldwide with the development of the MYRRHA accelerator driven system (ADS) since 1998.
MYRRHA is conceived as a flexible fast-spectrum research irradiation facility cooled by Lead Bismuth Eutectic (LBE). The nominal design power of the MYRRHA reactor is 70 MWth. It is driven in sub-critical mode by a high power proton accelerator based on LINAC technology delivering a 600 MeV proton beam of 4 mA intensity in Continuous Wave (CW) mode. The choice of the LINAC technology is dictated by the unprecedented reliability level required by the ADS application.
MYRRHA is proposed to the international community of nuclear energy and nuclear physics as a large research infrastructure to serve as a multipurpose fast spectrum irradiation facility for various fields of research such as transmutation of High Level Waste (HLW), material and fuel for Gen IV reactors, materials for fusion energy, innovative radioisotopes development and production, and fundamental physics.
MYRRHA has served since 1998, starting with the FP5 EURATOM framework, as the backbone of the Partitioning & Transmutation (P&T) strategy of the European Commission, and fosters R&D activities in the EU related to ADS and the associated HLM technology developments. MYRRHA was identified by SNETP (www.snetp.eu) as the European Technology Pilot Plant for the Lead-cooled Fast Reactor.
In 2015, SCK CEN and the Belgian federal government decided to implement the MYRRHA facility in three phases to minimise the technical risks associated with the required accelerator reliability.
On September 7, 2018 the decision was taken by the Belgian federal government to build this large research infrastructure.
In this talk, I will introduce the basis of an ADS, the MYRRHA main technological choices and its pan-European dimension. I will focus on the project current status and, in particular, on the MYRRHA phase I, MINERVA, consisting of the first 100 MeV of the LINAC and its related targets facility.
Speaker: Dr. Carmen Angulo (SCK-CEN Belgian Nuclear Research Centre)
4_Angulo.pdf 4_Angulo.pptx
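As a quick sanity check on the accelerator figures quoted in the abstract (the arithmetic below is an annotation, not the speaker's), a 600 MeV, 4 mA CW proton beam carries 2.4 MW of beam power:

```python
# Back-of-the-envelope beam power for the MYRRHA LINAC figures quoted above:
# a 600 MeV proton beam at 4 mA intensity in CW mode.
beam_energy_eV = 600e6   # kinetic energy per proton
current_A = 4e-3         # CW beam current

# For a CW beam, power [W] = (energy per particle in eV) x (current in A),
# since 1 A delivers 1 C/s and 1 eV per elementary charge equals 1 J/C.
power_MW = beam_energy_eV * current_A / 1e6
print(round(power_MW, 3))  # 2.4
```

That megawatt-scale power on target is one reason the unprecedented accelerator reliability mentioned in the abstract is so demanding.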
Strongly interacting matter in the laboratory and stars 35m
The interplay between the experimental results generated in terrestrial laboratories and the observations coming from stellar objects is of fundamental importance for offering solutions to long-standing puzzles in the physics of strongly interacting matter under extreme conditions. In this talk I will present the work I have been developing over the years regarding dense matter at finite temperature in two main fields: the properties of hadrons in a hot and dense medium, and the study of different phases of dense matter in neutron stars.
Speaker: Dr. Laura Tolós (ICE-Barcelona)
tolos.pdf
Reaching out Exotic Nuclei 35m
Speaker: Prof. Berta Rubio (IFIC-CSIC)
Flipphysics.pptx
Lunch 1h 55m IFIC Cafeteria
Convener: Dr. Sonja Orrigo (Chair) (IFIC)
Thermal resonances and chiral symmetry restoration. 13m
We analyze the role played by the thermal f0(500) state or σ in chiral symmetry restoration and propose an alternative sector (related to the thermal K∗0(700) or κ) to study O(4)×UA(1) restoration. The temperature corrections to the spectral properties of those states are included in order to provide a better description of the scalar susceptibilities χS and χκS around the transition region. We use the Linear Sigma Model to establish the relation between χS and the σ propagator, which is used as a benchmark to test the approach in which χS is saturated by the f0(500) inverse self-energy. Within such a saturation approach, a peak in χS around the chiral transition is obtained when considering the f0(500) generated as a ππ scattering pole within Unitarized Chiral Perturbation Theory at finite temperature. On the other hand, we show, using Ward identities, that χκS develops a maximum above the QCD chiral transition, above which it degenerates with χKP in the O(4)×UA(1) restoration region. Such a χκS peak can be described when it is saturated with the K∗0(700), which we compute in Unitarized Chiral Perturbation Theory through πK scattering at finite temperature. That approach additionally allows us to examine the χκS dependence on the light- and strange-quark masses. Finally, a comparison with the Hadron Resonance Gas is also studied in this context.
Speaker: Mrs. Andrea Vioque Rodriguez (UCM)
Thermal resonances and chiral symmetry restoration..pdf
Quark mass dependence of hadron resonances 13m
We study the dependence of hadronic resonances on the quark masses through the analysis of data from QCD lattice simulations from various collaborations. Using Machine Learning techniques such as the LASSO algorithm, we fit the lattice data in order to extrapolate them to the physical point and extract the quark-mass dependence of exotic resonances like the Ds0 and Ds1.
Speaker: Mr. Fernando Gil Domínguez (UV)
FlipPhysics Fernando Gil Domínguez.pdf
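As a toy illustration of the LASSO step mentioned in the abstract above — not the collaboration's actual datasets or fit model — the sketch below generates synthetic linear data in which only two of ten candidate terms matter and recovers them with an L1-penalised fit, implemented directly via iterative soft-thresholding (ISTA) so that no external ML library is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "lattice" data: an observable depends linearly on a few of many
# candidate terms; the L1 penalty drives irrelevant coefficients to zero,
# which is how LASSO selects the fit model.
n, p = 50, 10
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[[0, 3]] = [2.0, -1.5]            # only two terms actually matter
y = X @ true_w + 0.01 * rng.normal(size=n)

def lasso_ista(X, y, lam=0.1, n_iter=2000):
    """Minimise 0.5*||Xw - y||^2 + lam*||w||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)            # gradient of the smooth part
        z = w - g / L                    # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

w = lasso_ista(X, y)
print(np.round(w, 2))
```

The fitted vector is sparse: the two true coefficients are recovered and the other eight stay near zero, which is the model-selection behaviour the abstract relies on.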
New ways to shed light on neutrinoless double-beta decay 13m
Observing neutrinoless double-beta (0νββ) decay is undoubtedly one of the most anticipated breakthroughs in modern-day neutrino, nuclear and particle physics. If observed, the lepton-number-violating process would provide unique vistas beyond the Standard Model of particle physics. However, the expected decay rates depend on coupling constants, whose effective values are under debate, and on nuclear matrix elements (NMEs) that are poorly known [1]. Hence, it is crucial to gain a better understanding of the underlying theory in order to plan future experiments and to extract the beyond-Standard-Model physics from them.
I will discuss how the theory predictions can be improved either directly by investigating corrections to the 0νββ decay matrix elements, or indirectly by studying related processes that can be or have been measured. First, I will introduce our recent work on a new leading-order correction to the standard 0νββ-decay NMEs in heavy nuclei [2]. Then, I will discuss the relation between 0νββ-decay NMEs and other nuclear observables such as two-neutrino double-beta decay, double Gamow-Teller and double-gamma transitions. In addition, I will discuss the potential of ordinary muon capture as a probe of 0νββ decay, and discuss the results of our recent muon-capture studies [3].
[1] J. Engel, J. Menéndez, Rep. Prog. Phys. 80 (2017) 046301.
[2] L. Jokiniemi, P. Soriano, and J. Menéndez, Phys. Lett. B 823 (2021) 136720.
[3] L. Jokiniemi, T. Miyagi, S. R. Stroberg, J. D. Holt, J. Kotila, and J. Suhonen, arXiv:2111.12992.
Speaker: Dr. Lotta Jokiniemi (Universidad de Barcelona)
FlipPhysics2022-Jokiniemi.pdf
Improved calculations on neutrinoless double-beta decay matrix elements 13m
Neutrinoless double-beta (0νββ) decay is a hypothetical nuclear process where two neutrons transmute into two protons, with only two electrons being emitted with no accompanying antineutrinos. The measurement of such a process would imply that neutrinos are Majorana particles (their own antiparticle) and, since lepton number would not be conserved, this would point to an event beyond the Standard Model of particle physics [1].
The 0νββ decay rate is governed by the nuclear matrix element [2]. Since no measurements are available for this process, we resort to methods of nuclear structure to calculate these magnitudes. Our framework is the nuclear shell model, one of the most successful models of nuclear structure.
Within this framework, we evaluate for the first time both the leading long-range and the newly acknowledged short-range contributions to the matrix element for the 0νββ decay of the nuclei most relevant for experiments [3].
In addition, we use shell-model results to carry out, for the first time, more accurate calculations by combining them with ab initio quantum Monte Carlo results, which are able to capture additional correlations. We combine the nuclear shell model and quantum Monte Carlo approaches using the generalized contact formalism [4], and obtain improved results with respect to the standard shell-model matrix elements.
[1] F.T. Avignone III, S.R. Elliott, J. Engel, Double beta decay, Majorana neutrinos, and neutrino mass, Rev. Mod. Phys. 80 (2008) 481.
[2] J. Engel, J. Menéndez, Status and future of nuclear matrix elements for neutrinoless double-beta decay: a review, Rep. Prog. Phys. 80 (2017) 046301.
[3] L. Jokiniemi, P. Soriano, J. Menéndez, Impact of the leading-order short-range nuclear matrix element on the neutrinoless double-beta decay of medium-mass and heavy nuclei, Physics Letters B 823 (2021) 136720.
[4] R. Weiss, P. Soriano, A. Lovato, J. Menéndez, R. B. Wiringa, Neutrinoless double-beta decay: combining quantum Monte Carlo and the nuclear shell model with the generalized contact formalism, arXiv:2112.08146.
Speaker: Mr. Pablo Soriano Fajardo (Universidad de Barcelona)
FlipPhysics Talk.pdf
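Both 0νββ abstracts above revolve around the same standard master formula for the decay rate (written here for the usual light-neutrino-exchange mechanism, as an annotation rather than material from the talks):

```latex
\left[ T_{1/2}^{0\nu} \right]^{-1}
  = G^{0\nu}\, \bigl| M^{0\nu} \bigr|^{2}
    \left( \frac{m_{\beta\beta}}{m_{e}} \right)^{2},
\qquad
m_{\beta\beta} = \Bigl| \sum_{i} U_{ei}^{2}\, m_{i} \Bigr| ,
```

where $G^{0\nu}$ is a phase-space factor, $M^{0\nu}$ is the nuclear matrix element discussed in the talks, and $m_{\beta\beta}$ is the effective Majorana neutrino mass. The quadratic dependence on $M^{0\nu}$ is why NME uncertainties dominate the extraction of $m_{\beta\beta}$ from any measured half-life.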
Nucleosynthesis in the cosmos: The $^{26}$Al case 13m
Nucleosynthesis is an ongoing process in the cosmos, taking place in various astrophysical environments such as massive stars, core-collapse supernovae and novae. One of the most famous pieces of evidence for the continuity of this process was the discovery of γ-rays from radioactive 26Al in 1982 [1]. More recently, an all-sky map of the characteristic 1809-keV γ-ray line showed a distribution of 26Al that favours massive stars and supernovae as the main progenitors [2]. Nevertheless, observational data are not enough to pinpoint the source of 26Al production, and 14 to 29% of the total observed 26Al abundance is expected to have a nova origin [3].
In order to obtain a more precise picture of the different possible scenarios, the 25Al(p, γ)26Si reaction has been studied in nuclear facilities. This reaction has a direct influence on the abundance of 26Al, by bypassing the 25Mg(p, γ)26Al reaction responsible for the production of the 26Al cosmic γ-ray emitter.
In this contribution, I'll present results illustrating two complementary experimental domains: mass measurement and γ-ray spectroscopy. In the 25Al(p, γ)26Si reaction, the proton capture is dominated by resonant capture to a few states above the proton threshold in 26Si. The mass values of 25Al and 26Si contribute exponentially to the total resonant proton-capture rate in 26Si. The mass of 25Al has been precisely determined via Penning-trap measurements at the IGISOL facility of the University of Jyväskylä in Finland [4]. Additionally, a recent experiment at Argonne National Laboratory in the USA was performed to identify the resonant states in 26Si via γ-ray spectroscopy using the unique GRETINA+FMA setup. This experiment complements a recent spectroscopy study of the 26Si mirror nucleus, 26Mg, in which a previously unaccounted-for l=1 resonance in the 25Al + p system was observed [5].
[1] W. A. Mahoney, J. Ling, A. Jacobson, and R. Lingenfelter, Astrophys. J. 262, 742 (1982).
[2] R. Diehl et al., Astron. and Astrophys., 298:445 (1995).
[3] M. B. Bennett et al., Phys. Rev. Lett. 111, 232503 (2013).
[4] L. Canete et al., Eur. Phys. J. A 52, 124 (2016).
[5] L. Canete et al., Phys. Rev. C 104, L022802 (2021).
Speaker: Mrs. Laetitia Cañete (University of Surrey)
FlipPhysics_talk_21032022_lcanete.pdf
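The "exponential contribution" of the mass values can be made concrete with the standard narrow-resonance thermonuclear rate, in which the resonance energy E_r (fixed by the masses) sits in the exponent. The sketch below uses that textbook formula with invented resonance parameters — not the measured 26Si values — to show how strongly a small E_r shift moves the rate:

```python
import numpy as np

def narrow_resonance_rate(E_r_MeV, omega_gamma_MeV, mu, T9):
    """Narrow-resonance thermonuclear rate N_A<sigma*v> in cm^3 s^-1 mol^-1,
    from the usual Maxwell-Boltzmann folding (textbook constants)."""
    return (1.5399e11 * (mu * T9) ** -1.5 * omega_gamma_MeV
            * np.exp(-11.605 * E_r_MeV / T9))

# Illustrative numbers only (NOT the measured 26Si resonances): a resonance
# near E_r = 0.400 MeV at nova temperatures T9 ~ 0.3, reduced mass mu ~ 0.96.
r1 = narrow_resonance_rate(0.400, 1e-9, 0.96, 0.3)
r2 = narrow_resonance_rate(0.410, 1e-9, 0.96, 0.3)  # E_r shifted by 10 keV
print(r2 / r1)  # a 10 keV shift changes the rate by a factor of ~0.68
```

A mere 10 keV uncertainty in E_r — i.e. in the 25Al and 26Si masses — changes the rate by tens of percent, which is why the Penning-trap mass measurements matter so much.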
Delving into $\alpha$- and non-$\alpha$-structure beam-induced incomplete fusion at 4-7 MeV/A: the role of deformation 13m
The study of heavy-ion interactions using α- and non-α-structure beams at low energies [1-4] can provide a great deal of information on incomplete fusion (ICF) reactions. In order to understand the dynamics of ICF reactions, several studies have been made, and a large enhancement in the cross section of α-emitting channels with respect to calculations with the code PACE4 [5] has been reported [3,7,8]. In heavy-ion interactions at energies ≃ 4-7 MeV/A, using both strongly and weakly bound projectiles, a substantial ICF contribution has been observed [6-8]. The systematic behaviour behind the enhancement of the cross section of α-emitting channels is still an open area of investigation, and in this scenario the role of the deformation of the projectile and target nuclei is not well understood. The present work focuses on the role of the deformation [9] of the target nuclides in incomplete fusion at the energies of interest, using α- and non-α-structure beams. To this end, fourteen reactions have been studied using beams of 12C, 16O and 19F on various targets, e.g., 93Nb, 103Rh, 115In, 159Tb, 165Ho, 169Tm, 175Lu and 181Ta. It has been observed that the incomplete-fusion fraction increases exponentially with the deformation (β2) of the target nucleus, separately for each projectile. This systematic behaviour of the ICF fraction with the deformation parameter of the target nuclei has been used to develop an empirical relation. Further analysis is in progress and detailed results will be presented at the conference. The present work is supported by the Department of Science and Technology (DST), Delhi, India.
Speaker: Prof. Unnati Gupta (Amity University)
conf-spain-unnati.pdf
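An exponential systematics of the kind described above, F_ICF = a·exp(b·β2), can be fitted by linearising in log space. The sketch below does this on hypothetical (β2, ICF-fraction) pairs — the measured values from the fourteen reactions are not given in the abstract, so the numbers are invented to mimic the stated trend:

```python
import numpy as np

# Hypothetical (beta2, ICF-fraction %) pairs mimicking the reported trend;
# these are NOT the measured values from the fourteen reactions.
beta2 = np.array([0.01, 0.05, 0.10, 0.15, 0.22, 0.28])
f_icf = 5.0 * np.exp(6.0 * beta2) * (
    1 + 0.02 * np.random.default_rng(1).normal(size=6))  # ~2% scatter

# Fit F_ICF = a * exp(b * beta2) by linearising: ln F = ln a + b * beta2.
b, ln_a = np.polyfit(beta2, np.log(f_icf), 1)
print(round(np.exp(ln_a), 2), round(b, 2))
```

The recovered intercept and slope give the empirical parameters a and b per projectile, which is the form an empirical relation like the one announced in the abstract would take.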
Study of $\alpha-$transfer reactions with $^7$Be in the context of nuclear astrophysics 13m
In stellar evolution, the rate of the 12C(α,γ)16O reaction controls the C/O abundance ratio at the end of the helium-burning phase, thus defining the further course of development. At stellar temperatures of around 300 keV, the cross section of 12C(α,γ)16O is ∼ 10−17 b, which cannot be measured using current technology. The α−capture reaction populating the natural-parity states of the residual nuclei is an effective indirect tool for studying these types of reactions. In this case, it corresponds to alpha pickup by 12C to populate states of 16O, predominantly the 6.917 MeV state. Loosely bound stable nuclei with a prominent α−cluster structure, such as 6,7Li and 11B, have also been used in such studies, provided the α−transfer is "direct" and does not proceed via a compound nucleus. However, the breakup contributions from such nuclei have a significant impact on the transfer channels. Interestingly, the 7Be nucleus, though having an α−cluster structure and a lower breakup threshold of 1.58 MeV, demonstrates a lower breakup contribution compared to the transfer cross section. In this context, we carried out an experiment at HIE-ISOLDE, CERN, with 7Be + 12C at E = 5 MeV/A to study α−transfer reactions populating states in 16O that dominantly contribute to the He-burning process. Preliminary results will be presented.
Speaker: Mrs. Kabita Kundalia (Bose Institute, India)
FlipPhysicsWorkshop_Kabita.pdf
Searching for the nuclear Cooper pairs 13m
The pairing interaction induces nucleon-nucleon correlations that are essential in defining the properties of finite quantum many-body systems close to their ground states. A very specific probe of this pairing component of the nuclear interaction, which ties nucleons up in a highly correlated state, the nuclear Cooper pair, is two-nucleon transfer reactions. How pairing correlations can be probed in heavy-ion collisions is still an open question. Several experiments have been performed in the past searching for signatures, mainly via extraction of enhancement coefficients, defined as the ratio of the actual transfer cross section to the model prediction using uncorrelated states. Unfortunately, experimental evidence for these factors is marred by the fact that all existing studies involve reactions at energies above the Coulomb barrier, where the reaction mechanism results from the interplay between nuclear and Coulomb interactions.
With the development of new instrumentation, it has nowadays become possible to measure heavy-ion transfer reactions with high efficiency and good ion identification even at very low bombarding energies, where nuclei interact at large distances [1]. Multinucleon transfer reactions were measured in the 206Pb + 118Sn system at the INFN-LNL accelerator complex. The measurement was performed in inverse kinematics, using the heavy 206Pb beam and detecting the lighter reaction fragments in the magnetic spectrometer PRISMA. The total cross sections of different transfer channels will be extracted in an energy range from above to well below the Coulomb barrier. By direct comparison of one- and two-nucleon transfer probabilities (one expects the probability of the two-nucleon channel to be proportional to the square of the single-particle one), we will extract the enhancement factors at large distances. In a second stage, the experimental results will be compared with state-of-the-art microscopic calculations that include correlations [2].
[1] Corradi, L., et al., J. Phys. G, 36 (2009) 113101.
[2] Montanari, D., et al., Phys. Rev. Lett., 113 (2014) 052501.
Speaker: Mrs. Josipa Diklić (Ruđer Bošković Institute)
Diklic_Josipa_flipPhysics.pdf
Constraining the nuclear equation of state 13m
The nuclear equation of state (EOS) describes the relationship between state variables such as the density, pressure and temperature of a nuclear system. It is usually expressed as the energy per nucleon of a particular nuclear medium. Constraining the EOS parameters of asymmetric nuclear matter (where the asymmetry lies in the proton-to-neutron ratio) is of immense importance for understanding not just the properties of neutron-rich nuclei but also the physics of neutron stars, mergers and other astrophysical phenomena. To accomplish this goal in terrestrial laboratories, one must probe observables sensitive to changes in the EOS parameters of exotic unstable nuclei, which were for a long time experimentally unreachable. With the advent of radioactive-ion-beam facilities, the region further from the valley of stability became accessible.
An experiment aimed at constraining the symmetry-energy slope L to ±15 MeV was recently performed using the large-acceptance spectrometer R3B-GLAD at the GSI accelerator facility as part of the FAIR Phase-0 campaign [1]. The gathered data will be used to obtain total reaction, charge-changing, total neutron-removal and total Coulomb-excitation cross sections along the tin isotopic chain for 124,128,132,134Sn. The motivation behind these measurements lies in the correlation between neutron-removal and Coulomb-excitation cross sections and the respective observables known to have a tight connection with the parameter L: the neutron-skin thickness and the ground-state dipole polarizability [2,3]. Stringent constraints on L will be derived from comparison of the cross sections extracted from the data with predictions of RMF calculations employing different energy density functionals.
[1] R3B-Collaboration, https://www.r3b-nustar.de/.
[2] T. Aumann, C. A. Bertulani, F. Schindler, and S. Typel, Phys. Rev. Lett., 119:262501, Dec 2017.
[3] X. Roca-Maza and N. Paar., Prog. Part. Nucl. Phys., 101:96–176, 2018.
Speaker: Mrs. Ivana Lihtar (Ruder Boskovic Institute)
lihtar_ivana_flip_physics.pdf
New lifetime measurements for the 2$_1^+$ level in $^{112,120}$Sn by the Doppler-shift attenuation method 13m
The tin (Sn; Z = 50) isotopes constitute the longest chain of semi-magic even-even nuclei between the 100Sn (N = 50) and 132Sn (N = 82) double-shell closures, seven of which, 112,114,116,118,120,122,124Sn, are stable. These isotopes have become a prototypical benchmark for extensive microscopic theory and experiment, reflected in the large number of studies investigating the decay of their low-lying first-excited 2+ state. The transition characteristics are inferred through the B(E2; 0+g.s.→2+) values, which are contingent on the lifetime of the corresponding level and are the most direct and unambiguous test of the collective nature of the transitions.
Considerable interest has focused on the enhancement or suppression of collectivity of the excited 21+ state in the stable Sn isotopes. Independent experiments on Coulomb excitation, heavy-ion scattering and 21+ level-lifetime measurements report discrepant transition probabilities, with the lifetime estimates indicating significantly reduced collectivity. A re-examination has been carried out in the present work on two of the stable isotopes, 112,120Sn.
Low-lying levels in the 112,120Sn isotopes have been excited by inelastic scattering with heavy-ion beams. Level lifetimes have been measured using the Doppler-shift attenuation method, wherein the Doppler-affected γ-ray peaks from the decay of the 21+ level in each isotope have been analyzed using updated methodologies, and the corresponding B(E2; 0+g.s.→2+) values become indicative of the underlying collectivity. The present results are compared with existing estimates of the B(E2; 0+g.s.→2+) values in the stable Sn isotopes. They are also found to be in good agreement with the generalized seniority model as well as with state-of-the-art Monte Carlo shell-model (MCSM) calculations.
Speaker: Dr. Ananya Kundu (Tata Institute of Fundamental Research)
FLIP_2022_AK.pdf
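The link between a measured 2+ level lifetime and the B(E2) values discussed above can be sketched with the standard E2 decay-rate relation λ(E2) ≈ 1.225×10⁹·E⁵·B(E2), with E in MeV, B(E2) in e²fm⁴ and λ in s⁻¹. The numbers below are illustrative, not the measured 112,120Sn results, and pure E2 decay with negligible internal conversion is assumed:

```python
# Convert a 2+ level lifetime into a transition strength using the standard
# E2 decay-rate relation lambda(E2) ~ 1.225e9 * E^5 * B(E2), with E in MeV,
# B(E2) in e^2 fm^4 and lambda in 1/s (pure E2 decay, negligible internal
# conversion assumed; the numbers are illustrative only).
def b_e2_down(tau_s, e_gamma_MeV):
    """B(E2; 2+ -> 0+) in e^2 fm^4 from the level lifetime and gamma energy."""
    return 1.0 / (tau_s * 1.225e9 * e_gamma_MeV ** 5)

tau = 1.0e-12          # 1 ps lifetime from DSAM (illustrative)
e_g = 1.2              # gamma-ray energy in MeV (illustrative)
b_down = b_e2_down(tau, e_g)    # B(E2; 2+ -> 0+)
b_up = 5.0 * b_down             # B(E2; 0+ -> 2+), the (2Jf+1)/(2Ji+1) factor
print(round(b_down, 1), round(b_up, 1))
```

The inverse dependence on τ is the crux of the abstract: longer measured lifetimes directly imply smaller B(E2) values, i.e. reduced collectivity.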
Collinear Laser Spectroscopy and Fluorescence Detection 13m
Collinear laser spectroscopy provides access to many nuclear properties such as isotopic shifts of the nuclear mean-square charge radii, spins, nuclear magnetic moments and electric quadrupole moments. As measurements are carried out on short time scales, this method is well suited for the investigation of isotopes far from stability.
The development of many different techniques used in collinear laser spectroscopy has led to very small linewidths of the measured resonances (several tens of MHz [1]). As these developments are ongoing, new ideas for the fluorescence-detection region of collinear laser spectroscopy apparatuses are presented and discussed alongside the basic method.
[1] R Neugart et al 2017 J. Phys. G: Nucl. Part. Phys. 44 064002
Speaker: Mrs. Laura Renth (Institut für Kernphysik TU Darmstadt)
Renth-CoLaSpec.ppt
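The tens-of-MHz linewidths quoted above rest on the velocity-bunching effect of electrostatic acceleration, which the abstract does not spell out. The standard nonrelativistic argument (an annotation, not specific to the apparatus discussed) is:

```latex
E = \tfrac{1}{2}\, m v^{2}
\quad\Rightarrow\quad
\delta E = m v\, \delta v
\quad\Rightarrow\quad
\delta v = \frac{\delta E}{\sqrt{2 m E}},
\qquad
\delta\nu_{\mathrm{D}} = \nu_{0}\, \frac{\delta v}{c}
                       = \nu_{0}\, \frac{\delta E}{c \sqrt{2 m E}} .
```

Since the energy spread $\delta E$ set by the ion source is conserved under acceleration, the first-order Doppler width $\delta\nu_{\mathrm{D}}$ of a collinearly probed beam shrinks as $1/\sqrt{E}$, which is what makes the narrow resonances of [1] achievable.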
Welcome: Valencian Wine Tasting IFIC cafeteria
Tuesday, 22 March
Writing and speaking skills: Writing session Salón de Actos del IATA
Convener: Dr. Raquel Molina (Chair)
How to write an ERC proposal 1h
In this talk I will present my experience with the ERC grant application. I will share tips and tricks for the preparation phase, the proposal writing, and the interview. The talk will be based on my personal experience with the ERC Consolidator call 2020.
Speaker: Prof. Kathrin Wimmer (GSI-FAIR)
wimmer_FLIP.pdf
Writing and speaking skills: Public speaking session
Convener: Dr. Raquel Molina (Chair) (UV)
Writing skills for science outreach 1h
Speaker: Dr. Avelino Vicente (UV)
a_vicente.pdf
Public speaking skills for science 1h
Speaker: Prof. Isabel Cordero (UV)
public speaking.pdf
Gender equality in Science Salón de Actos del IATA
Convener: Prof. Mariam Tórtola (Chair) (IFIC-UV)
HORIZON EUROPE SEX & GENDER ANALYSIS IN RESEARCH 35m
Horizon Europe establishes Gender Equality as a cross-cutting principle and aspires to eliminate gender inequality and its intersection with other socio-economic inequalities through R&I systems, including and addressing unconscious biases and systemic structural barriers.
In order to achieve Gender Equality, the integration of the gender dimension into R&I content is mandatory and is a requirement set by default across all Work Programmes, destinations, and topics of Horizon Europe.
Addressing the gender dimension in research and innovation thus entails considering sex and gender in the whole R&I process: from the definition of the title to the methodology, the sample, the analysis, the language used and the dissemination of results.
The gender composition of the team and the existence of a Gender Equality Plan in the institution are a tiebreaker and an eligibility criterion, respectively.
Speaker: Prof. Capitolina Díaz (UV)
Capitolina Horizo Europe -Sex & G A in R.pptx
Gender Equality in Physics Salón de Actos del IATA
String theory and gender: a European experience 35m
In March 2013, the COST Action MP1210 "The String Theory Universe" was initiated for a duration of four years. The objectives were mainly scientific, but we were committed to taking a series of actions to address the problems confronted by women who want to pursue a scientific career.
Given the huge imbalance in the area (only 15% of the Action members were women), we thought the problems were severe and something had to be done.
In this talk I will speak about the initiatives we took to make these problems visible to all of our colleagues and to favour a change of perspective.
I think that our conclusions are still valid today.
Speaker: Prof. Mª Antonia Lledó (UV)
FlipPhysics 2022 Lledó.pdf
Gender Equality in Physics: Optics Salón de Actos del IATA
Convener: Prof. Mariam Tórtola (Chair) (UV)
Organization of gender-balanced events: a case of practice, National Meeting in Optics 2021 20m
Speaker: Martina Delgado-Pinar, Vice Chair of the Women in Optics and Photonics Committee of SEDOPTICA, representing the organizing committee of RNO2021
https://www.rno2021.es/#comite-organizador
A clear example of the gender imbalance in STEM fields is the under-representation of women scientists in the most visible events (plenary and invited talks) at conferences and workshops. The phenomenon of all-male panels is not unusual, although it is true that, in recent years, they have been denounced by researchers themselves as a case of misconduct.
To overcome this barrier for women, a collective effort must be made by the entire scientific community. In this respect, the involvement and support of scientific societies and institutions is crucial in order to positively reinforce measures against gender bias in the organization of events. The example that will be presented in this contribution is the organization of the National Meeting in Optics 2021 (www.rno2021.es), which was carried out by the Women in Optics and Photonics Committee (MOF, for its acronym in Spanish) of the National Optical Society in Spain, SEDOPTICA (www.sedoptica.es).
SEDOPTICA approved in 2020 an internal code of conduct for its committees, with a series of recommendations for the organization of gender-balanced events. This code of conduct was drafted and promoted by SEDOPTICA-MOF, and included aspects such as the men/women ratio in invited and non-invited talks and in scientific committees, and the need to avoid the usual allocation of administrative roles to women while men hold the more visible and science-related positions. This code can be read in [1].
In 2021, the National Meeting in Optics (RNO) 2021 was organised by SEDOPTICA-MOF. It is a triennial congress organized by SEDOPTICA, which has been held for more than 30 years. Each RNO brings together an average of 200 professionals from the different topics of Optics and Photonics in Spain and is where the latest scientific and technological advances in this field are presented. The 2021 organizing committee placed special emphasis on creating an equal and attractive congress for women and younger researchers.
To this end, the organizing committee wanted to highlight the role of women in Optics and Photonics, with a dedicated topic at the meeting, and a round table to discuss gender issues in scientific careers, with the participation of four leading women in research and industry. The plenary speakers were two world-leading researchers: Professor Jannick Rolland (University of Rochester) in visual science and imaging, and Professor Jelena Vučković (Stanford University) in quantum and nonlinear optics. It is worth noting that these two women were delighted to participate in this national meeting, even though their schedules were difficult to fit into the meeting's timetable, and we are sure that the nature of the event was a reason for them to collaborate with us. Their talks were recorded and can be viewed at [2].
In addition, special care was taken to ensure a balanced ratio of male and female speakers in every session. Remarkably, even in areas such as Optoelectronics, whose committee has a proportion of women below 20%, the proportion of female speakers was approximately 50%. Another example of positive action is that the participants in the competition for the best contribution by young researchers, the RNO2021 award, showed an approximately 50% ratio between men and women. Even though there were no explicit criteria for including gender aspects in the evaluation of the contributions, there were three women among the five finalists in the contest. These three results indicate that the scientific level of female researchers is as good as that of their male counterparts. Hence, the usual argument relating the lack of women in representative positions in science to scientific reasons does not apply when women have the right conditions for their participation.
As the code of conduct approved by SEDOPTICA states, the imbalance between men and women in STEM fields is no reason to disregard the possibility of equal and diverse events maintaining a high scientific level. RNO2021 is an example of this. The crucial point is to get out of the usual comfort zone for the selection of speakers and, in the case of not directly knowing women in certain fields, just get out of your personal circle and ask other researchers for suggestions. There are more and more associations and initiatives that can help with this, so: take action!
[1]https://areamujersedoptica.wordpress.com/2020/07/10/documento-de-recomendaciones-a-los-comites-de-sedoptica-para-evitar-el-sesgo-de-genero/ , last visit 15/01/2022
[2] Prof. Jannick Rolland https://www.youtube.com/watch?v=MSzqeqh2DS4
Prof. Jelena Vučković https://www.youtube.com/watch?v=EzhiOkpmGlc
Last visit 15/01/2022
Speaker: Dr. Martina Delgado-Pinar (University of Valencia)
FlipPhysics_MDP.pptx FlipPhysics_MDP_talk.pdf
Gender Equality in Physics: Physics and maternity Salón de Actos del IATA
Convener: Prof. Pas García (Chair) (UV)
Physics and Maternity Round table 50m
Motherhood has a huge impact on the careers of women scientists. Regarding the impact of family life on the work of male and female researchers, the evidence shown here indicates that having children is clearly detrimental to a woman's career in science. For men, however, if family does have an effect on their work, this effect is more positive than negative. In light of the findings, it seems evident that rearing children clearly interferes with the scientific productivity of women and with their chances of being promoted to a higher level when their productivity is the same. This conflict between family and profession for women scientists is clearly visible in the distribution of male and female academics in Spain by family situation. The INE Human Resources Survey reveals that only 38% of women Full Professors have children, as opposed to 63% of men, and that the percentage of single women is 21% as opposed to 15% of single men.
Keynote speaker: Dr. Isabel Torres (co-founder and chief executive of "Mothers in Science"). Participants: Dr. Núria Garro (Dpt. Applied Physics, UVEG) and Dr. Susana Planelles (Dpt. Astronomy and Astrophysics, UVEG). Chair: Prof. Pas García (Dpt. Optics, UVEG).
Speakers: Dr. Isabel Torres (Mothers in Science), Dr. Nuria Garro (UVEG), Dr. Susana Planelles (UVEG)
Gender Equality in Physics: Round table Salón de Actos del IATA
Convener: Inés Soler (UV)
Wednesday, 23 March
Machine learning Salón de Actos del IATA
Convener: Prof. Arantza Oyanguren (Chair) (IFIC-UV)
An introduction to Machine Learning in Particle Physics 35m
Speaker: Dr. Verónica Sanz (UV)
FlipPhysics_Sanz.pdf
Boost Radiation Hardness Assurance in your Space Mission with Machine Learning 15m
PRECEDER (Prediction of the Electrical Behavior of Electronic Devices under Radiation, Spanish acronym) is a new concept in the strategy of ensuring radiation hardness in electronics, developed by our group. The idea is based on the use of archival data to assess the risk associated with radiation environments without the need for irradiation testing. A critical step of Radiation Hardness Assurance (RHA) for space systems is the selection of parts in accordance with the expected radiation effects. Radiation testing is the most decisive way of studying radiation degradation. However, the increasing use of COTS (Commercial Off-The-Shelf) devices and the New Space challenges are pushing the need to find new approaches to assess the risk associated with the radiation environment.
PRECEDER applies Machine Learning methodology, searching for the appropriate algorithm and assessing the quality of the resulting solutions. The development of this tool includes the search for optimal usage of the accumulated data, the search for learning methods, the analysis of application features, and the prediction of the behavior of EEE (Electrical, Electronic and Electro-mechanical) devices under radiation.
In this work, the methodology and application that has been established will be shown. The first successful results, obtained for specific devices and conditions, will be presented as a practical example.
Speaker: Mrs. Amor Romero Maestre (Centro Nacional de Aceleradores)
FlipPhysics_AmorRomero.pptx
Forecasting hazardous Geomagnetically Induced Currents for Spanish critical infrastructures by using AI 15m
In the last decades, our society has become more interdependent and complex than ever before. Local impacts can cause global issues, as the current pandemic clearly shows, affecting the health of millions of human beings. It is also highly dependent on relevant technological structures, such as communications, transport, or power distribution networks, which can be very vulnerable to the effects of Space Weather. The latter has its origin in solar activity and its associated events, such as solar flares and coronal mass ejections, which may provoke disturbances, interruptions, and even long-term damage to these technical infrastructures, with drastic social, economic and even political impacts. However, these phenomena and their effects are not yet well understood, and their forecast is still in the early stages of development. This talk will present our project, which uses a multidisciplinary approach and aims to deeply understand and develop an early warning system to evaluate the impact of violent solar storms on Spanish critical infrastructures such as the power transmission grid, railways, and oil and gas pipelines. Specifically, we are developing an advanced machine learning based predictive model of the impact of future solar storms on the ground. This model will consist of two distinct stages. First, we are using as input real-time data from the solar wind space probe ACE (located at the L1 point in space) to develop a deep learning model taking into account past conditions to predict the variation of the magnetic field on the Earth's surface at different locations in the Iberian Peninsula. Second, we will feed these local predictions of the time variation of the magnetic field into a physical model of the 3D Earth's geoelectrical structure to generate the geoelectric fields that drive the geomagnetically induced currents (GICs).
Thus, the ultimate goal is to provide a real-time prediction of the GICs induced by extreme geomagnetic storms on the Spanish critical infrastructures. This talk will show our latest results and our prospects in this field.
Speaker: Dr. Florencia Castillo (Heidelberg University)
solar_weather.pdf
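The two-stage scheme described in the abstract above (a learned mapping from solar-wind data to the surface magnetic-field variation, followed by a geoelectric model that drives the GICs) can be illustrated with a deliberately minimal sketch. Everything numeric here is hypothetical: a plain linear map stands in for the trained deep learning model, and a single scalar impedance stands in for the 3D geoelectrical model of the Earth.

```python
# Illustrative two-stage sketch of a GIC early-warning pipeline.
# All coefficients are made up; this is not the project's model.
import numpy as np

def predict_db_dt(solar_wind, weights):
    """Stage 1: map solar-wind features (e.g. speed, density, Bz from
    ACE at L1) to the local surface magnetic-field variation dB/dt.
    A linear model stands in for the deep network."""
    return float(solar_wind @ weights)          # nT/min at one site

def geoelectric_field(db_dt, surface_impedance):
    """Stage 2: plane-wave approximation E = Z * dB/dt relating the
    magnetic variation to the horizontal geoelectric field."""
    return surface_impedance * db_dt            # V/km

def gic_proxy(e_field, network_coeff):
    """GICs in a grounded network scale linearly with E; one made-up
    coefficient replaces the full network model."""
    return network_coeff * e_field              # A

sw = np.array([600.0, 5.0, -12.0])   # toy speed, density, Bz sample
w = np.array([0.01, 0.1, -0.2])      # hypothetical fitted weights
db_dt = predict_db_dt(sw, w)
print(round(gic_proxy(geoelectric_field(db_dt, 0.5), 2.0), 2))  # 8.9
```

In the real system, stage 1 is a deep learning model conditioned on past solar-wind conditions, and stage 2 is a physical 3D model of the Earth's geoelectrical structure; the sketch only fixes the data flow between them.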
Medical Physics Salón de Actos del IATA
Convener: Dr. Ana Ros (Chair) (IFIC)
The rise of precision medicine: the valuable contribution of medical physics 35m
Speaker: Dr. Irene Torres (Hospital La Fe)
MedicalPhys3_0_FlipPhysics_ITE_Mar2022(1).pptx
Applications of Machine learning in Medical Physics: Risks and Benefits 35m
In this talk, we will present the application of machine learning techniques to address many medical physics problems, such as positron range correction in PET, dose estimation in radiotherapy planning, the guidance of ultrasound acquisitions, tissue segmentation, and automatic lesion detection. We will focus on the risks and potential benefits of these new techniques compared to current standard methods. A summary of the most common challenges in the implementation of these techniques, and how to overcome them, will also be presented. In conclusion, machine learning tools have the potential to revolutionize all areas of physics, providing solutions beyond what is currently possible, and, being so new, it is a great field for young researchers.
Speaker: Prof. Joaquín López (UCM)
TALK_FLIPPHYSICS_JLHERRAIZ_2022.pdf
High-Gradient S-band Backward Travelling Wave Accelerating Cavity experiments at IFIC 15m
High gradient radiofrequency (RF) accelerating cavities are one of the main research lines in the development of compact linear accelerators. A particular focus of these structures is medical hadron therapy applications. However, the operation of such cavities is currently limited by nonlinear electromagnetic effects that are intensified at high electric fields, such as dark currents and RF breakdowns. A new normal-conducting High Gradient S-band Backward Travelling Wave accelerating cavity for medical applications (v=0.38c) was designed and constructed by the TERA Foundation in collaboration with CERN. This cavity is being tested at the IFIC High-Gradient (HG) Radio Frequency (RF) laboratory. The main goal of the tests is to determine the maximum achievable accelerating gradient of this new design and to characterize dark current and breakdown formation in the structure, which could limit the applicability of this technology. In this work, we present experimental measurements and simulation results characterizing the nonlinear effects of this new accelerating cavity, and first conclusions about its applicability are discussed.
Speaker: Dr. Nuria Fuster (IFIC-CSIC)
FlipPhysics_Aceleradores_NFuster.pdf
Status of the PETALO project 15m
PETALO (Positron Emission TOF Apparatus with Liquid xenOn) is a new concept that seeks to demonstrate that liquid xenon (LXe), together with a SiPM-based readout and fast electronics, provides a significant improvement in the field of medical imaging with PET-TOF. Liquid xenon provides a continuous medium with a uniform response, avoiding most of the geometrical distortions of conventional detectors based on scintillating crystals. PETit, the first PETALO prototype built at IFIC (Valencia), started operation in July 2021. It consists of an aluminum box with a single volume of LXe and two planes of SiPMs that register the scintillation light emitted in xenon by the gammas coming from a Na22 radioactive source. After some months of data taking, PETit is expected to demonstrate the potential of the technology, providing measurements of the most relevant features: reconstruction of the position, energy and time of the interactions.
Speaker: Mrs. Carmen Romo (IFIC)
FlipPhysics_Romo_slides.pdf
Quantum Computing Salón de Actos del IATA
Convener: Prof. Armando Perez (Chair) (UV)
Tensor Networks: from Quantum Information to Quantum Many-Body Physics and Quantum Field Theory 35m
The term Tensor Network (TN) States designates a number of ansatzes that can efficiently represent certain states of quantum many-body systems. In particular, ground states and thermal equilibrium of local Hamiltonians, and, to some extent, real time evolution can be numerically studied with TN methods. Quantum information theory provides tools to understand why they are good ansatzes for physically relevant states, and some of the limitations connected to the simulation algorithms.
While originally introduced in the context of condensed matter physics, where they have become a state-of-the-art technique for strongly correlated one-dimensional systems, in recent years TN states have also been shown to be suitable for studying lattice gauge theories and other quantum field problems.
Speaker: Prof. Mari Carmen Bañuls (Max Planck Institute of Quantum Optics)
QuantumSimulations_FlipPhysics_March2022.pdf
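As a concrete, if toy-sized, illustration of the Tensor Network ansatz discussed above, the sketch below builds a bond-dimension-1 Matrix Product State for N qubits and contracts it back into the full 2^N amplitude vector. The function name and the |+>-state example are purely illustrative; real TN codes work at much larger bond dimensions and never form the full vector.

```python
# Toy Tensor Network sketch: an N-qubit product state as a
# bond-dimension-1 Matrix Product State (MPS), contracted back into
# the full 2**N state vector for verification.
import numpy as np

def mps_to_vector(tensors):
    """tensors: list of arrays of shape (Dl, 2, Dr) with matching
    bonds and trivial (size-1) outer boundaries; contract left to
    right into the 2**N amplitude vector."""
    state = np.ones((1, 1))                    # (basis index, bond)
    for A in tensors:
        state = np.tensordot(state, A, axes=([1], [0]))  # (K, 2, Dr)
        K, d, Dr = state.shape
        state = state.reshape(K * d, Dr)
    return state[:, 0]                         # close trivial boundary

# |+> on every site is exactly representable at bond dimension 1
plus = (np.array([1.0, 1.0]) / np.sqrt(2)).reshape(1, 2, 1)
N = 3
psi = mps_to_vector([plus] * N)
print(np.allclose(psi, np.full(2 ** N, 2 ** (-N / 2))))  # True
```

The point of the ansatz is that the list of small tensors, not the exponentially large vector, is what TN algorithms store and manipulate.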
Lunch 1h 40m
Student session: Young female talents of the FRACE prizes 2021 Salón de Actos del IATA
Convener: Dr. Raquel Molina Peralta (Chair) (IFIC-UV)
Topological superconductivity and Majorana modes for quantum computation: a materials science perspective 35m
My name is Elsa Prada and I am a theorist with 20 years of experience in condensed matter physics. I am interested in systems where quantum phenomena play an important role, such as low-dimensional materials and nanostructures, and in the technological applications we can derive from such quantum properties. This is nowadays dubbed the field of "Quantum Technologies". During my career I have worked on a diverse range of problems within condensed matter, including quantum information and entanglement based on superconducting heterostructures; electronic, spintronic and optoelectronic properties of two-dimensional crystals such as graphene, phosphorene or transition metal dichalcogenides; and, more recently, theory and applications of topological insulators and superconductors.
In this talk I will focus on my work in topological superconductors based on superconducting-semiconducting nanowires. These hybrid wires are by far the most explored (both theoretically and experimentally) and the most advanced candidates to achieve topological superconductivity. I will discuss the appearance of exotic emergent quasiparticles at the edges of these wires, called Majorana bound states or Majorana modes. These quasiparticles share properties with the fundamental particle Majorana fermion, but they possess non-trivial exchange statistics that turn them into anyons, which could make them useful candidates for quantum-bits, qubits, of future topologically protected quantum computers. I will summarize the advancements of the field during the last decade and the problems we still face to unambiguously create and detect Majorana modes in condensed matter systems.
Speaker: Prof. Elsa Prada (ICMM-CSIC)
ElsaPrada_CareerPath_Valencia.pdf
PET detectors, from benchtop to the clinics 35m
Positron Emission Tomography (PET) imaging constitutes the molecular imaging technique of excellence and is used to evaluate the uptake of a radio-tracer by an organ. To obtain PET images, patients are injected with radioisotopes that decay inside the patient's body, emitting a positron that subsequently annihilates with an electron in the patient's body, emitting two opposite 511 keV gamma-rays. PET detectors are optimized for the specific energy of 511 keV, and their operation principle is based on opposed detectors measuring these two emitted gamma-rays in coincidence.
After complex image reconstruction processes, a tomographic emission image is generated. To provide high-quality images, in addition to the reconstruction process, PET detectors have to be carefully designed and optimized. Key elements are the scintillation block, the photosensor and the readout electronics.
In this talk, the design, optimization, and implementation of these components is reviewed, starting at the laboratory level, overviewing the PET scanner assembly, and finishing with their translation into the clinics.
Speaker: Dr. Andrea Gonzalez (Stanford University)
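The coincidence principle described in the abstract above, opposed detectors registering the two back-to-back 511 keV gammas, can be sketched as a simple time-and-energy sorting step. The window and energy-gate values below are illustrative placeholders, not those of any particular scanner.

```python
# Sketch of PET coincidence sorting: pair single events from opposed
# detectors whose timestamps fall within a coincidence window and
# whose energies pass a gate around 511 keV. Values are illustrative.
COINC_WINDOW_NS = 4.0
E_LOW_KEV, E_HIGH_KEV = 435.0, 585.0    # energy gate around 511 keV

def coincidences(singles):
    """singles: time-sorted list of (time_ns, energy_keV, detector_id).
    Returns coincident pairs registered on different detectors."""
    pairs = []
    for i, (t1, e1, d1) in enumerate(singles):
        if not (E_LOW_KEV <= e1 <= E_HIGH_KEV):
            continue
        for t2, e2, d2 in singles[i + 1:]:
            if t2 - t1 > COINC_WINDOW_NS:
                break                   # input is sorted: stop early
            if d2 != d1 and E_LOW_KEV <= e2 <= E_HIGH_KEV:
                pairs.append(((t1, d1), (t2, d2)))
    return pairs

events = [(0.0, 511.0, 0), (1.5, 508.0, 1),    # true coincidence
          (50.0, 511.0, 0), (53.0, 300.0, 1),  # partner fails gate
          (90.0, 510.0, 1)]                    # unpaired single
print(len(coincidences(events)))  # 1
```

Each surviving pair defines a line of response between the two detectors; image reconstruction then works from the full set of such lines.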
Contribution of the $\Delta(1232)$ resonance in the pion photoproduction on Carbon-12 6m
Speaker: Gustavo Guerrero (IFIC-UV)
Study of Exotic Hidden Heavy Flavor States 6m
In recent years, a great experimental effort has led to the discovery of some exotic states found in the charmonium and bottomonium spectra. Some examples of such states are the Zc(3900), Zc(4020), Zcs(3985), Zb(10610) and Zb(10650). These states do not fit the conventional $q\bar{q}$ quark model given that they contain hidden-charm ($c\bar{c}$) or hidden-bottom ($b\bar{b}$) components, but they are also found to be charged. This implies a minimal structure of four valence quarks. Although there exist several exotic models which could describe these states, the molecular one is appealing due to the closeness of these states to the thresholds of some $D^{(*)}\bar{D}^{(*)}$ and $B^{(*)}\bar{B}^{(*)}$ channels. Within this framework, and making use of SU(3) light-flavor symmetry, we predict the masses and widths of additional Z states which remain to be seen in experiment.
Speaker: Victor Montesinos Llacer (IFIC-UV)
Implementation of a software defined radio (SDR) based beam current monitor for Schottky detectors in heavy ion storage rings 6m
With the increasing sensitivity and precision of resonant Schottky detectors, this technology becomes more valuable for determining the masses and lifetimes of as-yet-unstudied nuclei inside heavy ion storage rings, but also in general storage ring physics. At present, information from these detectors is gained by high-end units with software and hardware interfaces that are not versatile and/or not suitable for applications where scalability is indispensable. Here, software-defined radio (SDR) based data acquisition systems come in handy, mainly due to their low cost and relatively simple hardware, but also due to the fact that their functionality is almost entirely software-defined/programmable. If calibrated, Schottky detectors can facilitate beam current measurements that are orders of magnitude more sensitive compared to existing DC current transformers (DCCT). In this work, we report on the implementation of an SDR-based online beam current monitor for use with Schottky detectors in heavy ion storage rings such as the ESR at GSI/FAIR.
Speaker: Mariia Selina (Aachen University of Applied Sciences)
Gender socialization and the absence of women in science 6m
In this presentation, we analyse how gender stereotypes influence the choice of professional career. In particular, we discuss how patriarchal social conditioning implies a lower presence of women in science. We depict possible measures to achieve greater equity in an area as masculinized as the scientific one.
Speaker: Aida Garrido (USal)
Advantages of Tomosynthesis for COVID-19 Detection with Artificial Intelligence 6m
Medical imaging has been one of the main tools employed during the COVID-19 pandemic for diagnosis and disease progression assessment. The most commonly used modalities have been Chest X-Rays (CXR) and Computed Tomography (CT). However, CXR has limited sensitivity, while CT is more expensive, less accessible, delivers more dose to the patients, and requires sanitizing the scanner after each patient acquisition. Tomosynthesis, which obtains X-ray images from a few source positions, has been proposed as a good compromise between both modalities.
The use of Artificial Intelligence (AI) tools to analyze medical images of COVID-19 patients has been proposed by many groups. It has been shown that Neural Networks (NN) can be trained to detect COVID-19 affections accurately, provided enough cases are available. Nevertheless, while many public databases of CXR and CT images of COVID-19 patients have been generated worldwide, there is a lack of databases of tomosynthesis images, which makes it difficult to train an NN for this modality.
In this work we propose to use the existing CT and X-ray databases to perform realistic simulations and generate X-ray tomosynthesis images. We made use of a database containing 200 CT images of COVID-19 patients, along with segmentations of the affected lung region. Projections at 0° and ±15° were simulated with an in-house developed, GPU-accelerated, ultrafast Monte Carlo (MC) code. Two NNs were trained to detect whether each lung is affected by COVID-19 or not: the first is defined with one input channel corresponding to the 0° projection (a standard CXR), while the other employs three input channels corresponding to the 0° and ±15° projections (a simplified tomosynthesis acquisition). Results show that the three-channel NN outperforms the one-channel NN. Despite the limited number of cases used in this work, and the reduced number of projections, the results are very promising and motivate further research on the advantages that can be obtained with tomosynthesis.
Speaker: Clara Freijo Escudero (UCM)
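The one-channel versus three-channel network inputs described above amount to stacking the simulated projections along a channel axis before training. A minimal sketch, with a made-up image size and zero-filled arrays standing in for the Monte Carlo projections:

```python
# Sketch of the two network input formats: a single 0° projection
# (CXR-like) versus 0° and ±15° projections stacked along a channel
# axis (tomosynthesis-like). Shapes and data are placeholders.
import numpy as np

H, W = 64, 64                                  # toy image size
proj_m15 = np.zeros((H, W))                    # -15° projection
proj_0 = np.zeros((H, W))                      #   0° projection
proj_p15 = np.zeros((H, W))                    # +15° projection

one_channel = proj_0[np.newaxis]               # shape (1, H, W)
three_channel = np.stack([proj_m15, proj_0, proj_p15])  # (3, H, W)

print(one_channel.shape, three_channel.shape)
```

Apart from the first convolutional layer accepting one versus three input channels, the two networks can otherwise share the same architecture, which makes the comparison a clean test of the extra angular information.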
Neural networks for reconstruction of the underlying kinematics in high energy collisions 6m
The parton-level kinematics plays a crucial role in understanding the internal structure of hadrons and improving the precision of the calculations. To better understand the kinematics at the partonic level, we study the production of one hadron and a direct photon, including up to Next-to-Leading-Order Quantum Chromodynamics and Leading-Order Quantum Electrodynamics corrections. Using a code based on Monte Carlo integration, we simulate the collisions and analyze the events to determine the correlations among measurable and partonic quantities. We then use these results to apply Machine Learning algorithms that allow us to find the momentum fractions of the partons involved in the process in terms of suitable combinations of the final-state momenta.
Speaker: David Francisco Rentería Estrada (Universidad Autónoma de Sinaloa)
Poster_David.pdf
Cabibbo suppressed single pion production off the nucleon induced by antineutrinos 6m
In this work we study Σπ and Λπ production off free nucleons driven by the strangeness-changing weak charged current. We calculate the total cross sections for all possible channels and estimate the flux-averaged total cross sections for experiments like MiniBooNE, SciBooNE, T2K, and Minerva. The model is based on the lowest order effective SU(3) chiral Lagrangians in the presence of an external weak charged current and contains Born and lowest-lying decuplet resonant mechanisms that can contribute to these reaction channels. We also compare and discuss our results with others following similar and very different approaches.
Speaker: Maria Benitez Galan (UGR)
Dark matter gamma-ray signals in the Milky Way: brightest dark satellites versus diffuse galactic emission 6m
Speaker: Sara Porras Bedmar (UAM)
Poster1.pdf
Core-collapse supernovae from red super giant stars 6m
Supernova (SN) explosions are among the most energetic events in the observable universe.
This makes them the best natural laboratories for investigating extreme physical phenomena that would otherwise not be reproducible on Earth.
These powerful explosions also produce chemical elements, enriching the interstellar medium in heavy elements.
Three-dimensional long-time simulations of core-collapse supernovae (CCSNe) are crucial to better understand the connection between the progenitor star and the supernova remnants.
These studies have been performed using mainly two approaches: (i) a detailed 3D analysis of individual events, e.g. SN 1987A (Müller et al. 1991; Orlando et al. 2015, 2020), or (ii) 1D surveys of stars with different masses and initial conditions (Ugliano et al. 2012; Sukhbold et al. 2016; Ertl et al. 2020).
Here, we intend to extend the current 3D models in the fashion of the latter 1D simulations, considering SNe originated by different red super giant (RSG) progenitors with zero-age main-sequence (ZAMS) masses between 12.5M⊙ and 27M⊙.
We first study two stars with MZAMS=19.8M⊙ and MZAMS=25.5M⊙.
The first one shows approximate spherical symmetry in the early stages of the explosion, with asymmetries arising only later.
The second model is more interesting: it shows a peculiar evolution, in which the explosion develops mainly on one plane, and it is beginning to present structures reminiscent of the supernova remnant Cassiopeia A.
This case certainly requires further investigation, but the appearance of such structures so early in the evolution is very promising.
CCSN simulations are a precious resource for investigating explosion mechanisms and features of the ejecta distribution.
Moreover, from the computational results it is possible to infer some observational properties that can be used to characterize a physical source and retrieve information on its progenitor star.
Speaker: Beatriz Giudice (UV)
FlipPhysics.pdf
NEXT-100 status and prospects 6m
NEXT (Neutrino Experiment with a Xenon TPC) is a double beta decay experiment located in Huesca (Spain) at the Laboratorio Subterraneo de Canfranc (LSC). It searches for the neutrino-less double beta decay (ββ0ν) of 136Xe, a lepton-number-violating process that would prove the Majorana nature of neutrinos and eventually provide handles for a measurement of the absolute neutrino mass. The latest stage of the experiment finished in summer 2021 with the decommissioning of the NEXT-White detector. NEXT-White proved the outstanding performance of the NEXT technology in terms of energy resolution (<1% FWHM at 2.6 MeV) and topology-based background rejection. NEXT-White has also measured the relevant backgrounds for the ββ0ν search using both 136Xe-depleted and 136Xe-enriched xenon. The following stage of the experiment is the NEXT-100 detector, currently under construction. This large-scale detector will hold ~100 kg of 136Xe with a background index below 5×10⁻⁴ counts/keV/kg/year and will perform the first competitive ββ0ν search within NEXT. As validated with NEXT-White, NEXT-100 will reach a sensitivity to the half-life of 6×10²⁵ yr after 3 years of data taking, paving the way for future ton-scale phases. In this poster, I will present an overview of the status of the construction, the screening program and the sensitivity predictions for the NEXT-100 detector.
Speaker: Miryam Martinez Vara (DIPC - IFIC)
posterFLIP.pdf
Calibrating the ANAIS-112 dark matter experiment with neutrons 6m
ANAIS (Annual modulation with NaI Scintillators) is a direct dark matter detection experiment whose goal is to confirm or refute, in a model-independent way, the highly controversial positive annual modulation signal reported by the DAMA/LIBRA collaboration for more than twenty cycles. ANAIS-112, consisting of 112.5 kg of NaI(Tl) scintillators, has been taking data at the Canfranc Underground Laboratory, in Spain, since August 2017. The dark matter interpretation of the modulation signal depends critically on a complete understanding of the detector response to nuclear recoils, which are expected to be induced via elastic scattering of dark matter particles off target nuclei in many of the models considered for such dark matter particles. It is well known that the light output from nuclear recoils is reduced, with respect to electrons depositing an equivalent energy, by the quenching factor, a parameter which is actually not well known for NaI(Tl) scintillators. Not only have recent measurements of the sodium quenching factor shown significantly different results, but also very few measurements of the iodine quenching factor have been performed up to now. This magnitude is usually determined by measurements in a monoenergetic neutron beam, requiring small scintillating crystals to avoid multiple scattering. The study presented here relies on a different approach, aiming at evaluating the quenching factor by directly exposing the large ANAIS-112 crystals to neutrons from a Cf-252 source. For this purpose, detailed Monte Carlo simulations of the full experimental set-up are required, which should be checked against the experimental measurements. Comparison between measurement and simulation allows testing different quenching factor models following a best-fit strategy.
Moreover, this simulation could also be exploited to improve the ANAIS-112 event selection procedure, helping to identify nuclear-recoil-dominated regions and to design an efficiency calibration procedure.
Speaker: Tamara Pardo Yanguas (UZ)
Poster_TamaraPardo_FlipPhysics22.pdf
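The role the quenching factor plays in the analysis above can be stated compactly: the electron-equivalent energy recorded for a nuclear recoil of energy E_nr is E_ee = QF(E_nr) · E_nr. The two QF models in the sketch below are invented placeholders standing in for the models that are tested against the Cf-252 calibration data.

```python
# Toy illustration of the quenching factor (QF). Both QF models are
# made up; they only show how competing models predict different
# electron-equivalent energies for the same nuclear recoil.
def qf_constant(e_nr_kev):
    """A constant-QF model for Na recoils (hypothetical value)."""
    return 0.21

def qf_rising(e_nr_kev):
    """An energy-dependent QF model (hypothetical parameters)."""
    return 0.15 + 0.01 * e_nr_kev

def measured_energy_kevee(e_nr_kev, qf_model):
    """Electron-equivalent energy seen by the detector: QF * E_nr."""
    return qf_model(e_nr_kev) * e_nr_kev

# the two models diverge with recoil energy, which is what a
# best-fit comparison against calibration data can discriminate
for e_nr in (5.0, 10.0, 20.0):
    print(e_nr,
          measured_energy_kevee(e_nr, qf_constant),
          measured_energy_kevee(e_nr, qf_rising))
```

In the measurement described above, each candidate QF model is folded into the Monte Carlo simulation, and the simulated spectrum is compared with the Cf-252 data to pick the best-fitting model.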
A probabilistic approach to the hierarchy problem 6m
In this work, we provide a simple model that studies the probability of obtaining a given hierarchy between two scales. In particular, we work in a theory with a light SU(2)L sector and a heavy SU(2)H sector, and two scalar doublets, each corresponding to one sector. Furthermore, both sectors can interact by means of a U(1)X. By the Coleman-Weinberg mechanism, the gauge bosons and scalars obtain different masses. We analyze the mass ratio of these sectors in order to discuss the hierarchy between them, and we define a probability associated with this hierarchy. We study different cases in which one of the sectors is fixed or both have free parameters, and we also consider the effect of including an interaction between them. We conclude that the probability of obtaining very large hierarchies is (logarithmically) small but not negligible. This toy model covers some interesting situations; for example, our result could be applied to a theory with a known low-energy sector and an additional weakly-interacting heavy dark sector.
Speaker: Clara Alvarez Luna (UCM)
Poster_Clara_Álvarez_Luna_FlipPhysics.pdf
CP violation in hadronic two-body D meson decays: a SM calculation 6m
In 2019 the LHCb experiment discovered, for the first time, a clear signal of direct CP violation in the charm sector, in particular in the decays of D0 mesons to π+π− and K+K−. However, the theoretical determination of the strong part of the related decay amplitudes in the SM remains uncertain, mainly due to the difficulties of dealing with charmed hadronic asymptotic states. A long-known tool for assessing such amplitudes is dispersion relations. These arise from fundamental properties of the S-matrix elements and are data driven at large . Although they are easily understood and deployed in elastic channels, they become much more complicated when inelasticities are present. In this work we extract the CP-even and CP-odd D→ππ/KK amplitudes within the SM, analysed in the isospin basis and with the use of unitarity and the large number-of-colours expansion, by performing global fits to the current experimental data. Moreover, we implement novel numerical methods for dispersion relations in the inelastic isospin-0 channels.
Speaker: Eleftheria Solomonidi (IFIC-UV)
Long-lived heavy neutral leptons at the LHC: probing $N_R$SMEFT operators. 6m
Interest in searches for heavy neutral leptons (HNLs) at the LHC has increased considerably in the past few years. In the minimal scenario, HNLs are produced and decay via their mixing with the active neutrinos in the Standard Model (SM) spectrum. However, many SM extensions with HNLs have been discussed in the literature, which sometimes change expectations for LHC sensitivities drastically. In the $N_R$SMEFT, one extends the SM effective field theory with operators including SM singlet fermions, which allows one to study HNL phenomenology in a "model-independent" way. Within the framework of $N_R$SMEFT, we study the sensitivity of ATLAS to HNLs for four-fermion operators with a single HNL. These operators might dominate both the production and decay of HNLs, and we find that new physics scales in excess of 20 TeV could be probed at the high-luminosity LHC.
Speaker: Rebeca Beltrán Lloria (IFIC-UV)
PosterHNL_RebecaBeltran.pdf
Student session
Career path of Paula Tuzón, physicist and current Secretary for Climate Emergency of the GVA 25m
Speaker: Dr. Paula Tuzón (GVA)
A perspective on working outside academia 25m
Speaker: Dr. Gaetana Anamiati (DNV)
Gaetana Anamiati.pptx
Student session: Discussion with researchers at IFIC and UV Salón de Actos del IATA
Discussion with researchers at IFIC and UV 50m
Speakers: Dr. Anabel Morales (IFIC), Dr. Emma Torró (IFIC), Dr. Gabriela Barenboin (UV), Prof. Mariam Tórtola (UV), Dr. María Moreno Llácer (UV), Prof. Olga Mena (IFIC-CSIC), Dr. Raquel Molina (UV), Dr. Sonja Orrigo (IFIC), Dr. Valentina De Romeri (IFIC)
kahoot_round_table_230322.pdf
Picaeta Conciencia 1h 30m IFIC Cafeteria
Thursday, 24 March
Dark matter Salón de Actos del IATA
Convener: Dr. Valentina De Romeri (Chair) (IFIC-UV)
Indirect detection of dark matter: status and perspectives 35m
Unveiling the nature of dark matter is one of the major endeavors of our century.
The search for dark matter is developed across multiple channels and with different techniques.
In particular, indirect searches aim at disentangling dark matter signals above the largely dominant astrophysical background in the flux of cosmic particles, such as charged cosmic rays and gamma rays. Limits on the dark matter parameter space, and, even more, detection of tentative signals, crucially depend on our understanding of the astrophysical background. I will discuss the main astrophysical ingredients of relevance for dark matter indirect detection and how they impact the current limits on dark matter particle models.
I will finally provide some prospects for future observations.
Speaker: Dr. Francesca Calore (CNRS)
FlipPhysics_FCalore.pdf
Probing the nature of dark matter with gamma rays 35m
Speaker: Prof. Gabriela Zaharijas (University of Nova Gorica (CAC))
2022-3-FlipPhysics-Zaharijas-v2.pdf
Convener: Dr. Valentina De Romeri (Chair) (IFIC)
Experimental status and perspectives on dark matter direct detection and latest ANAIS results 35m
Understanding the nature of dark matter has proven to be one of the biggest challenges faced in the XXI century by Cosmology, Astrophysics and Particle Physics, and it will require pursuing complementary approaches. Among them, the dark matter direct detection strategy has been developed since the 1980s, strongly increasing detection sensitivity by introducing new detection techniques, ultra-low radioactive-background techniques and powerful background rejection strategies. Experimental results are in general compatible with estimated backgrounds, but the DAMA/LIBRA observation of an annual modulation in the detection rate, compatible with that expected for dark matter particles from the galactic halo, is one of the most puzzling results in the present particle physics scenario.
In this talk, we will review the present status of direct detection searches for dark matter in general and, in particular, of the testing of the DAMA/LIBRA result, focusing on experiments using the same target material: sodium iodide. The talk will cover in more detail the performance and prospects of the ANAIS-112 experiment, which, using 112.5 kg of NaI(Tl) as target, has been taking data at the Canfranc Underground Laboratory in Spain since August 2017.
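The annual-modulation signature being tested can be sketched as a cosine-modulated rate (amplitude and baseline here are illustrative placeholders, not DAMA/LIBRA values; the peak near June 2 follows from the Earth's motion through the standard galactic halo):

```python
import math

def rate(t_days, r0=1.0, s_m=0.02, t0=152.5, period=365.25):
    """Expected rate R(t) = R0 + S_m * cos(2*pi*(t - t0)/T), peaking around
    day ~152 (June 2). R0 and S_m are illustrative, not experimental values."""
    return r0 + s_m * math.cos(2.0 * math.pi * (t_days - t0) / period)
```

A modulation search then fits R0, S_m and (optionally) t0 to the measured rate in a given energy window and asks whether S_m is compatible with zero.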
Speaker: Prof. Mª Luisa Sarsa (Universidad de Zaragoza)
ANAIS112_FlipPhysics.pdf ANAIS112_FlipPhysics.pptx
Astroparticle Salón de Actos del IATA
Convener: Prof. Olga Mena (Chair) (IFIC)
Neutrinos in cosmology and astroparticle physics 35m
Speaker: Dr. Ninetta Saviano (JGU Mainz)
flip_valencia_saviano.pdf
Neutrino Experiments 35m
The combined result of a number of experiments demonstrated that neutrinos have mass and oscillate, and experimentalists have made enormous progress in measuring neutrino properties. However, fundamental questions about neutrinos remain: Is the neutrino its own antiparticle? What is the absolute scale of neutrino masses? How are the three neutrino mass states ordered from lightest to heaviest (the neutrino "mass ordering")? Is the CP symmetry violated in the neutrino sector? Are there sterile neutrino species in addition to the three active ones participating in the weak interactions? Current and future neutrino experiments are designed with state-of-the-art technology to provide answers to these questions.
Speaker: Dr. Clara Cuesta (CIEMAT)
Neutrinos_CCuesta.pdf
Gravitational waves Salón de Actos del IATA
Convener: Prof. Alicia Sintes (Universitat de les Illes Balears)
Gravitational waves: observations and mathematical aspects 35m
In this talk I will present a brief overview of the current gravitational wave detections and some of the most important consequences we can derive. I will also mention the plans for the forthcoming observation runs. In the last part of the talk I will comment on how mathematics can contribute in the field of gravitational wave astronomy, focusing on formulations of General Relativity, numerical simulations and data analysis.
charla isa.pdf
Cosmology Salón de Actos del IATA
Early Universe Cosmology: how to co-generate Dark Matter and the Baryon asymmetry 35m
Speaker: Prof. Laura Covi (University of Goettingen)
Cogenesis_FlipPhysics2022.pdf
Cosmological tensions 35m
The Cosmic Microwave Background temperature and polarization anisotropy measurements have provided strong confirmation of the LCDM model of structure formation. Even though this model explains the observations remarkably well across a vast range of scales and epochs, with the increase of experimental sensitivity a few interesting tensions between cosmological probes, and anomalies in the CMB data, have emerged with different statistical significance. While some portion of these discrepancies may be due to systematic errors, their persistence across probes strongly hints at cracks in the standard LCDM cosmological scenario. I will review these tensions, showing some interesting extended cosmological scenarios that can alleviate them.
Speaker: Dr. Eleonora di Valentino (Sheffield University)
divalentino.pdf
Precession in black hole binary systems: toward calibrating precessing phenomenological waveform models to numerical relativity 13m
Since 2015 the international advanced gravitational wave detector network has confidently detected tens of short transient signals, whose sources have been identified as mergers of compact objects, primarily binary systems of black holes. The main goal of this talk will be to discuss the phenomenon of precession in black hole binaries, as well as the first steps to further improve its description towards the next observational run, which will finally achieve design sensitivity for the LIGO and Virgo detectors. Binary black hole systems span a parameter space of nine intrinsic parameters: two spin vectors, the mass ratio, and two parameters associated with eccentricity. When the black hole spins are orthogonal to the orbital plane, there exists an equatorial symmetry of the spacetime that is preserved in time, and so are the spin directions and the orbital plane itself. The parameter space for these systems, referred to as non-precessing, reduces considerably. This is no longer true when the spins are misaligned with the orbital angular momentum: the spin-orbit and spin-spin couplings induce a precessing motion of the orbital plane and spins, which breaks all the symmetries. Further, precession leads to a complex modulation of the signal which becomes hard to model due to the high dimensionality of the problem. This phenomenon can be simplified by using an approximate map between precessing signals in a non-inertial co-precessing frame and non-precessing signals. This approach is often called the "twisting-up approximation" and has typically been used in phenomenological waveform models. In this talk, we will discuss the main caveats of the approximation and the preliminary steps towards calibrating precession to numerical relativity simulations. These efforts may become essential to improve the accuracy of the current (fourth) generation of phenomenological waveform models developed in our group.
Speaker: Mrs. Maria del Lluc (UIB)
FlipPhysics_Planas22.pdf
Searching for long-duration transient gravitational waves from glitching pulsars using Convolutional Neural Networks 13m
Pulsars are spinning neutron stars which emit an electromagnetic beam. We expect pulsars to slowly decrease their rotational frequency due to the radiation emission. However, sudden increases of the rotational frequency have been observed from different pulsars. These events are called "glitches", and are followed by a relaxation phase with timescales from days to months. Gravitational-wave (GW) emission may follow these peculiar events, including long-duration transient continuous waves (tCWs) lasting hours to months. These are modeled similarly to continuous waves but are limited in time. Previous studies have searched for tCWs from glitching pulsars with matched filtering techniques and by computing a detection statistic, the F-statistic, maximized over a set of transient parameters like the duration and start time of the potential signals. This method is very sensitive, but the computational costs can easily increase when widening the frequency and spindown search bands and the duration of the potential signals.
In order to reduce computational and human effort, we present a procedure for detecting potential tCWs using Convolutional Neural Networks (CNNs). CNNs have proven to be valid networks for detecting various CW signals, but have never been tested on tCWs from glitching pulsars. For our initial configuration, we train the CNN on F-statistic "atoms", i.e. quantities computed during the matched filtering step from signal/noise data. This still constrains the frequency evolution of the signal to be CW-like, but already allows for flexible amplitude evolution and significant speed-up compared to the traditional method. In the future, we also plan to implement a second CNN with input the frequency-time maps, which in this case can search for unmodeled tCWs both in frequency and amplitude evolution, which we expect to be a further improvement to the speed and performance of the search.
Speaker: Mrs. Luana Modafferi (Universitat de les Illes Balears)
presentation_FlipPhysics_LMM.pdf
Interference signatures in the gravitational lensing of gravitational waves 13m
When gravitational waves propagate near massive objects, they are deflected as a result of gravitational lensing. This phenomenon is well known for electromagnetic waves, and for gravitational waves it is expected to become a promising new instrument in astrophysics. When the time delay between the different paths is comparable with the wave's period, interference and diffraction appear due to lensing, and they are imprinted in the waveform as a "beating pattern". These effects are likely to be observed near the caustics, but the short-wave asymptotics associated with the geometrical optics approximation breaks down close to the caustic, where wave optics should be used. In this talk I will describe the crossover from wave optics to geometrical optics for the point mass lens model, where two parameters are used to characterize the lensing effect: the angular position of the source with respect to the caustic, and the Fresnel number, which is the ratio between the Schwarzschild radius and the wavelength. We obtain an interference pattern for the transmission factor, which allows us to suggest a simple formula for the onset of geometrical-optics oscillations which relates the Fresnel number with the angular position of the source in units of the Einstein angle.
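In the geometrical-optics limit that this crossover connects to, the point-mass lens produces two images whose magnifications depend only on the source position y in units of the Einstein angle. A minimal sketch of this textbook result (illustrative, not the talk's wave-optics calculation):

```python
import math

def image_magnifications(y):
    """Magnifications (mu_plus, mu_minus) of the two point-mass-lens images
    for a source offset y (Einstein-angle units). The wave-optics beating
    pattern oscillates between limits set by these two values."""
    root = math.sqrt(y * y + 4.0)
    mu_plus = 0.5 + (y * y + 2.0) / (2.0 * y * root)
    mu_minus = 0.5 - (y * y + 2.0) / (2.0 * y * root)
    return mu_plus, mu_minus
```

A handy check on the algebra is the exact point-lens identity mu_plus + mu_minus = 1 for any y > 0.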
Speaker: Mrs. Helena Ubach (Universitat de Barcelona, ICCUB)
2022-03-24_FlipPhysics_Valencia_definitive_Ubach.pdf
Thermal gravitational wave emission from Holography in strongly-coupled theories 13m
There is a potentially detectable background of stochastic gravitational waves produced by thermal sources in the Universe. In this work, we provide the first computation of the gravitational-wave spectrum emitted by a thermal plasma in a strongly-coupled theory: strongly-coupled N=4 Super Yang-Mills. Given the non-applicability of perturbative methods in strong-coupling computations, we resort to gauge/string duality to obtain the shape of the spectrum. We then compare it with the analogous spectrum derived from the perturbative analysis in weakly-coupled Super Yang-Mills. The convolution of both spectra with the expansion of the Universe yields the stochastic background of thermal gravitational waves present in the Universe. This work aims to mark the beginning of the study of thermal emission from strongly-coupled cosmological sources, which could be relevant for research on dark matter and other cosmological implications.
Speaker: Mrs. Lucía Castells (Universitat de Barcelona)
FlipPhysics_Lucia_Castells.pdf
Parameter estimation of gravitational wave events with state-of-the-art phenomenological waveform models in the frequency and the time domain. 13m
In this talk, we present a re-analysis of different black hole merger gravitational wave events detected by the LIGO and Virgo interferometers with state-of-the-art phenomenological waveform models, IMRPhenomX and IMRPhenomT, which include higher spherical harmonics and spin precession. Thanks to their rapid and accurate evaluation of the waveforms, and to the automation of our Bayesian inference runs, we test the waveform model families, the improvements in the precession treatment, non-informative priors, and different sampler settings and codes. In most of the studied events, the influence of higher modes is small, unless the event is massive. In that case, IMRPhenomT further improves the fit to the data over IMRPhenomX, owing to dropping the SPA approximation and other improvements in the waveform modeling. The prior choices also play an important role in challenging short signals.
Speaker: Mrs. Maite-Lucena Mateu (University of the Balearic Islands)
FlipPhysic_Mar2022_MateuLucena.pdf
Particle Physics Salón de Actos del IFIC (PCUV)
Convener: Dr. Emma Torró Pastor (chair) (IFIC)
Constraining the absolute neutrino mass via time-of-flight measurements of the Supernovae electron neutrinos with DUNE. 13m
Supernova (SN) explosions are the most powerful cosmic factories of all-flavor, MeV-scale neutrinos. Their detection is of great importance not only for astrophysics, but also to shed light on neutrino properties. Since the first observation of a SN neutrino signal in 1987, the international network of SN neutrino observatories has been greatly expanded, in order to detect the next galactic SN explosion with much higher statistics and accuracy in the neutrino energy-time-flavor space. The Deep Underground Neutrino Experiment (DUNE) is a proposed leading-edge neutrino experiment, planning to begin operations in 2026. DUNE will have the capability to extract precious information about SN neutrinos. In this contribution, I will discuss the constraints that we expect to achieve with DUNE on the absolute value of the neutrino mass, obtained by considering the time delay in the propagation of massive electron neutrinos from production in the SN environment to their detection in DUNE. Furthermore, the comparison of sensitivities achieved for the two possible neutrino mass orderings is discussed, as well as the effects due to propagation in the Earth matter.
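The time-of-flight effect exploited here follows from expanding the neutrino velocity for m ≪ E: a neutrino of mass m and energy E arrives later than a massless particle by Δt ≈ (D/2c)(mc²/E)². A back-of-the-envelope sketch (illustrative numbers, not the DUNE sensitivity analysis):

```python
KPC_M = 3.0857e19   # metres per kiloparsec
C = 2.9979e8        # speed of light, m/s

def time_delay_s(m_ev, e_mev, distance_kpc=10.0):
    """Arrival delay (s), relative to light, of a neutrino of mass m_ev (eV)
    and energy e_mev (MeV) travelling distance_kpc kiloparsecs."""
    return 0.5 * (distance_kpc * KPC_M / C) * (m_ev / (e_mev * 1.0e6)) ** 2
```

For a galactic SN at 10 kpc, a 1 eV neutrino of 10 MeV arrives about 5 ms late, which sets the scale of the delays such an analysis must resolve.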
Speaker: Federica Pompa (IFIC Valencia)
1_FlipPhysics_FP.pdf
Extending the Reach of Leptophilic Boson Searches at DUNE and MiniBooNE with Bremsstrahlung and Resonant Production 13m
New gauge bosons coupling to leptons are simple and well-motivated extensions of the Standard Model. We study the sensitivity to gauged L_\mu−L_e, L_e−L_\tau and L_\mu−L_\tau both with the existing beam-dump-mode data of MiniBooNE and with the DUNE near detector. We find that including bremsstrahlung and resonant production of the Z^\prime, which decays to e± and μ± final states, leads to a significant improvement over existing bounds, especially for L_\mu−L_e and L_e−L_\tau at DUNE, while competitive constraints can be achieved with the existing data from MiniBooNE's beam dump run.
Speaker: Mr. Francesco Capozzi (IFIC)
talk_Capozzi.pdf
Visible final-state kinematics in $b \to c\tau( \pi\nu_\tau, \rho\nu_\tau, \mu\bar{\nu}_\mu\nu_\tau)$ reactions 13m
In the context of lepton flavor universality violation (LFUV) studies, we study different observables related to the b→cτν̄_τ semileptonic decays. These observables are expected to help in distinguishing between different NP scenarios. Since the τ lepton is very short-lived, we consider three subsequent τ-decay modes, two hadronic (πν_τ and ρν_τ) and one leptonic (μν̄_μν_τ), which have been previously studied for B̄→D(∗) decays. This way the differential decay width can be written in terms of visible (experimentally accessible) variables of the massive particle created in the τ decay.
There are seven different τ angular and spin asymmetries that are defined in this way and that can be extracted from experiment. In addition to these asymmetries, we study the d²Γ_d/(dω dcosθ_d), dΓ_d/dcosθ_d and dΓ_d/dE_d distributions.
We present numerical results for the Λ_b→Λ_c τν̄_τ semileptonic decay, which is being measured with precision at LHCb.
Speaker: Mrs. Neus Penalva (IFIC)
FlipPhysics_NeusPenalva.pdf
Flavour Symmetry & Neutrino Masses 13m
An extra-dimensional extension of the Standard Model is presented. It displays a flavor A4 symmetry among the three generations of fermions in the high-energy regime. The model offers a symmetrical origin for quark and lepton mixings in a unified framework. The neutrino masses in the model emerge at one loop in a scotogenic fashion. The minimalist setup of the model is highly predictive and includes a dark sector whose lightest particle can be identified as a dark matter candidate.
Speaker: Omar Medina (IFIC-UV)
BeamerFlipPhysicsOmarMedina2022.pdf
Dark sector searches with Na64 experiment at CERN 13m
The existence of dark sectors is an exciting possibility to explain the origin of Dark Matter (DM). In addition to gravity, DM could interact with ordinary matter through a new, very weak force. This new interaction could be mediated by a new massive vector boson, called the dark photon (A'). If the A' exists, it could be produced through kinetic mixing with a bremsstrahlung photon from a high-energy electron scattering in a target. The A' could then decay invisibly into light DM particles, A′→χχ, or visibly, into e+e−. Searching for the former in events with large missing energy allows us to probe the γ−A′ mixing strength and the parameter space close to the one predicted by the relic dark matter density. Motivation for searching for visible decays has recently been enhanced by the anomaly observed in 8Be and 4He nuclear transitions, which could be explained by the existence of a 17 MeV boson also decaying into e+e−. In this talk, we present the NA64 results from the combined 2016-2018 data analysis for the visible and invisible modes. The experiment resumed data taking in 2021. The latest results and the future prospects will also be covered in this talk. Finally, the new NA64 muon program, exploring dark sectors weakly coupled to muons, will also be presented.
Speaker: Dr. Laura Molina Bueno (IFIC-UV)
molina_FLIP_24032022.pdf
Signatures of primordial black hole dark matter at DUNE and THEIA 13m
Primordial black holes (PBHs) are potential dark matter candidates whose masses can span many orders of magnitude. If they have masses in the 10¹⁵−10¹⁷ g range, they can emit sizeable fluxes of MeV neutrinos through evaporation via Hawking radiation. We explore the possibility of detecting light (non-)rotating PBHs with future neutrino experiments. We show that future neutrino experiments like DUNE and THEIA will be able to set constraints on PBH dark matter, thus providing complementary probes in a part of the PBH parameter space currently constrained mainly by photon data.
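The quoted mass window can be motivated with the Hawking temperature T = ħc³/(8πGMk_B). A quick estimate (standard formula with rounded SI constants, for illustration only) shows that PBHs of ~10^15 g indeed radiate at MeV temperatures:

```python
import math

HBAR = 1.0546e-34     # reduced Planck constant, J s
C = 2.9979e8          # speed of light, m/s
G = 6.674e-11         # Newton's constant, m^3 kg^-1 s^-2
K_B = 1.3807e-23      # Boltzmann constant, J/K
K_TO_MEV = 8.617e-11  # k_B expressed in MeV/K

def hawking_temperature_mev(mass_g):
    """Hawking temperature (MeV) of a black hole of mass mass_g (grams):
    T = hbar * c^3 / (8 * pi * G * M * k_B), falling as 1/M."""
    mass_kg = mass_g * 1.0e-3
    t_kelvin = HBAR * C**3 / (8.0 * math.pi * G * mass_kg * K_B)
    return t_kelvin * K_TO_MEV
```

Since T scales as 1/M, the heavier end of the window (10^17 g) radiates at correspondingly lower, ~0.1 MeV, temperatures.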
Speaker: Mr. Pablo-Miravé Martínez (IFIC)
FlipPhysics_MartinezMirave.pdf
Axion quality from the symmetric representation of SU(N) 13m
The Peccei-Quinn solution to the strong CP problem has a problematic aspect: it relies on a global U(1) symmetry which, although broken at low energy by the QCD anomaly, must be an extremely good symmetry of high-energy physics. This issue is known as the Peccei-Quinn quality problem. We propose a model where the Peccei-Quinn symmetry arises accidentally and is respected up to high-dimensional Planck-suppressed operators. The model is an SU(N) dark gauge theory with fermions in the fundamental representation and a scalar in the symmetric representation. The axion arises from the spontaneous symmetry breaking of the gauge group, and the quality problem is successfully solved for a large enough number of dark colors N. The model includes additional accidentally stable bound states which provide extra Dark Matter candidates beyond the axion.
Speaker: Dr. Giacomo Landini (IFIC-UV)
AxionQualityFLIP2022.pdf
Sensitivity of CTA to gamma-ray emission from the Perseus galaxy cluster 13m
We estimate the sensitivity of the Cherenkov Telescope Array (CTA) to detect diffuse gamma-ray emission from the Perseus galaxy cluster, both from interactions of cosmic rays (CR) with the intra-cluster medium, or as a product of annihilation or decay of dark matter (DM) particles in case they are weakly interactive massive particles (WIMPs). The observation of Perseus constitutes one of the Key Science Projects proposed by the CTA Consortium for the first years of operation of the CTA Observatory. In this talk, we will focus on the DM-induced component of the flux. Our DM modeling includes the substructures we expect in the main halo of Perseus, as predicted within the standard cosmological model hierarchical structure formation scenario, which will boost the annihilation signal significantly. We compute the expected CTA sensitivity using a likelihood maximization analysis including the most recent CTA instrument response functions. We also model the expected CR-induced gamma-ray flux in the cluster, and both DM- and CR-related uncertainties via nuisance parameters. We will show the sensitivity of CTA to discover, at best, diffuse gamma-rays in galaxy clusters for the first time. Even in absence of signal, we show that CTA will allow us to provide stringent and competitive constraints on TeV DM, that will rely on state-of-the-art modeling of the cluster's DM distribution. Finally, we will discuss the optimal strategy for CTA observations of Perseus.
Speaker: Mrs. Judit Pérez (IFT UAM)
flipphysics_perseus_CTA_JPR_v2.pdf
Dark Matter search in dwarf irregular galaxies with Fermi -LAT 13m
In this talk we highlight the main results on dark matter (DM) searches in dwarf irregular galaxies with the Fermi Large Area Telescope. We analyze 11 years of Fermi-LAT data corresponding to the sky regions of 7 dwarf irregular (dIrr) galaxies. DIrrs are DM-dominated systems, recently proposed as interesting targets for the indirect search for DM with gamma rays. We create a spatial template of the expected DM-induced gamma-ray signal with the CLUMPY code, to be used in the analysis of Fermi-LAT data. No significant emission is detected from any of the targets in our sample. Thus, we compute upper limits on the DM annihilation cross-section versus mass parameter space. The strongest constraints are obtained for bb̄ and are at the level of ⟨σv⟩ ∼ 7 × 10⁻²⁶ cm³ s⁻¹ at m_χ ∼ 6 GeV.
Speaker: Mrs. Viviana Gammaldi (DFT & IFT UAM)
dIrr_Fermi_IBS_IFT_FlipPhysics_v1.pdf
Searching for dark-matter waves with pulsar polarimetry 13m
In this talk I will explain how the polarization of photons emitted by astrophysical sources might be altered as they travel through a medium of dark matter composed of ultralight axion-like particles (ALPs). I will describe a new, more robust analysis we developed to search for this effect. Afterwards, I will show the resulting strong limits on the axion-photon coupling for a wide range of masses. Finally, I will comment on possible optimal targets and the potential sensitivity to axionic dark matter in this mass range that could be achieved using pulsar polarimetry in the future.
Speaker: Mr. Jorge Terol (Instituto de Astrofísica de Canarias (IAC))
Fuzzy.ppt
Dark-matter halo shapes from fits to SPARC galaxy rotation curves 13m
We fit galactic rotation curves obtained by SPARC with dark matter haloes that are not spherically symmetric, but allowed to become prolate or oblate with a higher-multipole density distribution. This is motivated by observing that the flattening of v(r) = constant is the natural Kepler law for a filamentary rather than a spherical source, so that elongating the distribution could bring about a smaller chi-squared, all other things being equal. We compare results with different dark matter profiles and extract the best fits to the ellipticity, comparing with cosmological simulations of dark matter haloes.
[1] Bariego Quintana, Adriana; Llanes-Estrada, Felipe; Manzanilla Carretero, Óliver (2021). Dark-matter prolate halo shapes from fits to SPARC galaxy rotation curves. Proceedings of EPS-HEP2021 (arXiv:2109.11153 [hep-ph]).
[2] Llanes-Estrada, Felipe (2021). Elongated Gravity Sources as an Analytical Limit for Flat Galaxy Rotation Curves. Universe 7, 346; 10.13140/RG.2.2.35022.41289.
[3] Allgood et al. (2006). The Shape of Dark Matter Halos: Dependence on Mass, Redshift, Radius, and Formation. Monthly Notices of the Royal Astronomical Society 367, 1781-1796; 10.1111/j.1365-2966.2006.10094.x.
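The motivating observation, that a filamentary source naturally yields a flat rotation curve while a point mass gives a Keplerian fall-off, can be sketched as follows (a toy model with hypothetical masses, not the SPARC fit itself):

```python
import math

G = 4.30091e-6  # Newton's constant in kpc * (km/s)^2 / M_sun

def v_point(r_kpc, m_sun=1.0e11):
    """Keplerian circular velocity around a point mass: falls as 1/sqrt(r)."""
    return math.sqrt(G * m_sun / r_kpc)

def v_filament(r_kpc, lam_sun_per_kpc=1.0e10):
    """Circular velocity for a filament with enclosed mass M(<r) = lambda * r:
    v^2 = G * lambda * r / r = G * lambda, independent of r (flat curve)."""
    return math.sqrt(G * lam_sun_per_kpc)
```

Allowing the halo to elongate interpolates between these two limits, which is what the multipole fit above exploits.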
Speaker: Mrs. Adriana Bariego (IFT-UAM)
AdrianaBariego_DM_halo_shapes.pptx
Shedding light on low-mass subhalo survival with numerical simulations 13m
In this work, we carry out a suite of specially-designed numerical simulations to shed further light on dark matter (DM) subhalo survival at mass scales relevant for gamma-ray DM searches, a topic subject to intense debate nowadays. Specifically, we have developed and employed an improved version of DASH, a GPU N-body code, to study the evolution of low-mass subhaloes inside a Milky Way-like halo with unprecedented accuracy. We have simulated subhaloes with varying mass, concentration, and orbital properties, and considered the effect of the gravitational potential of the Milky-Way galaxy itself. In addition to shedding light on the survival of low-mass galactic subhaloes, our results will provide detailed predictions that will aid current and future quests for the nature of DM.
Speaker: Mrs. Alejandra Aguirre (IFT UAM)
survival-flipmar.ppsx
Convener: Dr. Maria Moreno Llácer (IFIC (CSIC-UV), Valencia)
Probing the interaction of the Higgs boson and the top-quark to explore the origin of the masses of elementary particles. 13m
Exploring the mechanism that explains the origin of the masses of elementary particles, fermions and gauge bosons, remains one of the main objectives of the Particle Physics program of the LHC. One experimental probe consists of measuring the strength of the interaction between the Higgs boson and the top quark, named the top-Yukawa coupling, using the full dataset collected by the ATLAS experiment during the Run 2 operational period of the proton-proton collider LHC. Exhaustive studies of processes involving the associated production of Higgs bosons and top quarks carried out in the ATLAS collaboration are reviewed. In particular, the associated Higgs production with a single top quark has the potential to measure both the size and the sign of the top-Yukawa coupling. The exploration of this process is challenging due to the small rate predicted by the current theory of the Standard Model. Therefore, sophisticated analysis techniques that integrate Machine Learning developments are needed. Such a rare process cannot be observed even with the full LHC Run-2 statistics, and indeed an observation of this signal would be a clear indication of new physics beyond the Standard Model, as it would imply deviations from the expected value of both the sign and magnitude of the top-Yukawa coupling.
Speaker: Susana Cabrera Urbán (IFIC-CSIC)
FlipPhysics-SusanaCabrera.pdf
Measurement of the quadruple-differential angular decay rates of single top quark produced in the t-channel at sqrt(s)=13 TeV 13m
The fact that the top quark's lifetime is shorter than its hadronization and depolarization timescales makes the kinematic properties of its production and decay an important probe of physical processes beyond the Standard Model (SM). This challenging analysis of the fully differential top-quark decay will probe the tWb vertex structure using single-top-quark events at a center-of-mass energy of 13 TeV at the LHC, using the full Run 2 dataset recorded with the ATLAS detector. A simultaneous measurement of the five generalized W boson helicity fractions and two phases, the polarisation in three orthogonal directions of the produced top quark, as well as the t-channel production cross-section will be performed. This study is exceptional in that it uses a novel model-independent framework proposed in EPJ C77 (2017) 200 and a large amount of data from proton-proton collisions, an integrated luminosity of 139 fb-1. After measuring the relevant physical quantities mentioned above, it will be possible to put stringent limits on complex EFT operators of the tWb vertex. The same measurement can be performed with early Run 3 data to constrain EFT parameters at a different energy scale. Deviations from expected values would provide hints of physics beyond the SM, and furthermore, complex values could imply that the top-quark decay has a CP-violating component.
Speaker: Mariam Chitishvili (Instituto de Fisica Corpuscular (IFIC) - CSIC/UV)
FlipPhysics-Mariam.pdf
Design of an alpha contamination detector with high sensitivity 13m
Particle-physics experiments are currently searching for events whose probability is extremely low, such as neutrinoless double beta decay or dark matter candidates such as WIMPs. This creates the need to perform highly sensitive experiments in underground facilities that shield them from cosmic rays and environmental radiation. However, one source of radiation is always present: that from radon.
The goal of my work is the design and development, through simulation in the REST environment, of such an alpha detector. The detector must be able to characterize the alpha background caused by the decay chain of 222Rn in its active volume and that of the radon progeny on its internal surfaces (especially 210Po, whose half-life is longer than those of the other isotopes in the chain). To this end, I am characterizing and studying the response of this alpha detector, which is still under development by GIFNA and whose final result will be of great interest for the experiments being carried out at the LSC facilities.
Speaker: Mrs. Ana Quintana (Universidad de Zaragoza)
Application of a quantum algorithm to Feynman loop integrals 13m
In this talk we present an application of a quantum algorithm to Feynman loop integrals. We propose a suitable modification of Grover's algorithm for the identification of the causal singular configurations of multiloop Feynman diagrams. The quantum algorithm is implemented in two different quantum simulators; the output obtained is directly translated into the causal thresholds needed for the causal representation in the loop-tree duality.
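As a hedged illustration of the amplitude amplification underlying such an approach, here is a toy statevector simulation of the standard Grover iteration (not the talk's modified oracle); the marked basis states stand in for the causal singular configurations to be identified:

```python
import numpy as np

def grover_search(n_qubits, marked, iterations):
    """Toy statevector Grover: amplify the amplitudes of the 'marked'
    basis states (stand-ins for causal singular configurations)."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0                 # oracle: flip marked signs
        state = 2.0 * state.mean() - state    # diffusion: inversion about the mean
    return state

# 3 qubits, one marked state: ~2 iterations nearly maximize its probability.
probs = grover_search(3, [5], 2) ** 2
```

With 8 basis states and one marked state, two iterations raise the success probability from 1/8 to roughly 0.95, which is the usual quadratic speed-up picture.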
Speaker: Mrs. Norma Selomit (IFIC-UV)
Virtual Tour ATLAS (Salón de Actos del IFIC)
Conveners: Emma Torró Pastor (chair) (IFIC), Maria Moreno Llácer (chair) (IFIC (CSIC-UV), Valencia)
Friday, 25 March
Particle Physics (Salón de Actos del IATA)
Searches for New Physics at neutrino experiments 35m
Neutrinos are the most elusive particles in the Standard Model. Despite being so abundant in the Universe, we still do not know many of their properties: how massive are they? how many neutrinos are there? is there CP violation in the leptonic sector? do they have a connection to the dark matter, or new interactions that we are unaware of? In this talk I will present an overview of neutrino phenomenology and new physics searches using current and future neutrino experiments.
Speaker: Dr. Pilar Coloma (IFT, Universidad Autónoma de Madrid)
Theoretical Aspects of Flavour Physics 35m
Finding the organising principle of the flavour sector is one of the big challenges in particle physics:
a) why are there three generations of fermions?
b) why is the up quark about 100,000 times lighter than the top quark, although they have the same gauge quantum numbers?
c) why do the three generations of quarks hardly mix, whereas the three lepton generations have large mixing?
d) could there be more than three generations?
e) how many neutrinos are there?
f) could there be also more generations of Higgs particles?
In this talk, different theoretical ideas and possible experimental tests will be discussed.
Speaker: Dr. Claudia Hagedorn (IFIC, UV/CSIC)
Experimental Particle Physics with the ATLAS Detector 35m
One of the goals of particle physics is to explain the structure of matter at the smallest distance scales. For decades, the properties of the basic building blocks of matter have been investigated in ever greater detail. However, even today some profound but simple questions, such as the origins of dark matter in the universe, remain unanswered. The attempt to understand the material world around us in the simplest possible terms has involved ingenious feats of scientific sleuthing. Such fundamental questions are being addressed by using the ATLAS experiment to look at the high-energy collisions produced at the CERN Large Hadron Collider. These energetic collisions provide, for a brief instant, the energy necessary to produce new forms of matter, as was done a fraction of a second after the big bang. This presentation will illustrate how we use a very large-scale collider to probe the incredibly small, which can provide answers to questions on a universal scale!
Manuella Vincter is a Canada Research Professor of Physics at Carleton University and a Fellow of the Royal Society of Canada. Her primary research focus is with the ATLAS experiment at the CERN Laboratory in Geneva, Switzerland where she is the ATLAS Deputy Spokesperson. ATLAS is one of the defining experiments of its generation; its results help elucidate such fundamental questions of physics as the origins of mass and the existence of dark matter in the universe.
Speaker: Prof. Manuella Vincter (Carleton University, ATLAS)
Convener: Dr. Maria Moreno Llácer (chair) (IFIC (CSIC-UV), Valencia)
Discovering the Compact Muon Solenoid Experiment at CERN 35m
Would you like to know what we do at the European Organization for Nuclear Research with proton collisions?
Learn about amazing physics driven by high level physicists from all over the world.
Discover a huge breadth of research topics, from the discovery of the Higgs boson to searches for the unknown.
Speaker: Barbara Alvarez Gonzalez (Universidad de Oviedo)
Flavourful footprints towards TeV scale Physics 35m
In the last few years, flavor experiments have been reporting deviations with respect to the predictions of the Standard Model. These anomalies share patterns of lepton flavor universality violation and seem to suggest new physics at the (hopefully accessible) TeV scale. Many attempts to understand these signals have been pursued in our community, ranging from all sorts of simplified models to elaborate model building with a wide range of extra matter fields and gauge symmetries. In this talk we will consider a simple extension of the Standard Model based on the Pati-Salam idea of quark-lepton unification. This economical and well-motivated theory turns out to predict the ingredients needed to accommodate such potential new physics and can be naturally realized at a low scale. As a renormalizable completion of the Standard Model, it predicts non-trivial signatures and correlations among observables that may allow it to be tested in the not-too-distant future.
Speaker: Dr. Clara Murgui (Caltech)
Experimental Particle Physics (LHCb) 35m
Speaker: Dr. Carla Marin (CERN (LHCb))
Axion and ALP landscape 35m
Speaker: Prof. Belen Gavela (UAM)
Lunch (Cafeteria del IFIC)
Symbolic prizes
Summary: Overview and closing
Probabilistic distance-based quantizer design for distributed estimation
Yoon Hak Kim (ORCID: orcid.org/0000-0003-4577-5347)
EURASIP Journal on Advances in Signal Processing, volume 2016, Article number 91 (2016)
We consider an iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives for distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized counterpart, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the average logarithmic quantized posterior distribution, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and we argue that the algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance over typical designs and previously published design techniques.
In distributed estimation systems where spatially separated sensor nodes are battery-powered and operate under strict limitations on wireless communication bandwidth, the sensor nodes measure the parameter of interest, quantize their measurements, and send the quantized data to a fusion node which then performs estimation of the parameter. It is reported that the rate-distortion performance can be greatly improved by adopting efficient quantizers at nodes as compared with simple uniform quantizers.
In the distributed source coding (DSC) framework, where nodes at different locations collect data and transmit them to a fusion node, practical design techniques for quantizers have been reported [1–3]: to achieve the Wyner-Ziv bound, computationally efficient trellis codes for DSC were proposed in [1]. An iterative design for quantizers in the Lloyd algorithm framework was shown to further reduce the distortion by producing non-regular scalar quantizers, implying that several disjoint intervals can be mapped to a single codeword [2]. In addition, the various practical design algorithms developed in the Lloyd algorithm framework are typically affected by the initialization of quantizers, leading to numerous poor local optima. To overcome this, an iterative algorithm using a deterministic annealing technique was proposed for a robust DSC system [3]. An iterative algorithm for the construction of the optimal quantization partitions was studied in distributed estimation systems [4], where the computation of the estimator function used in encoding such partitions may be practically prohibitive. To avoid this encoding complexity, a suboptimal approach (i.e., a linear estimator) was considered for quantizer design [5]. Under the assumption of one-bit messages from the nodes to the fusion node, universal decentralized estimation schemes were investigated [6] and distributed estimators were derived for pragmatic signal models [7]. A brief investigation of quantization in distributed estimation and classification systems was presented in [8].
Since standard quantization focuses on minimization of a local metric (e.g., reconstruction error of local sensor readings), it should be modified to optimize at each step a global metric such as estimation error: more specifically, the two main tasks in the Lloyd design framework (i.e., quantization partition construction and the corresponding codeword computation) should be dedicated to minimizing the estimation error. However, the design difficulty arises since the quantization partitions constructed to minimize the estimation error are not generally independently encodable at each node: that is, encoding (or mapping) of local measurements into one of the quantization partitions would be possible after computing the global metric, which is not accessible at each node where only local measurements are available.
To circumvent the difficulty, a distributional distance between the distributions under two hypotheses was suggested as a global metric for quantizer design to yield a manageable design procedure [9]. A distance-based metric to measure the loss due to quantization was adopted for uniform or nonuniform quantizers in the high-resolution regime [10]. A vector quantization technique that minimizes the Kullback Leibler (KL) divergence was proposed for distribution matching [11]. An iterative design algorithm that maximizes the minimum asymptotic relative efficiency (ARE) was proposed, illustrating that the score-functional quantizer (SFQ) would be optimal so as to maximize the minimum ARE for distributions with the monotonicity property [12]. For acoustic sensor networks, a distance error at nodes was proposed in the functional quantization framework to ensure convergence in an iterative process [13]. A weighted sum of both of the metrics was proposed as a cost function (i.e., local + λ × global) along with a search for proper weights that guarantee construction of the encodable quantization partitions while maintaining the non-increasing cost function at iterations [14], showing a significant performance gain over typical designs. To reduce design complexity, efficient algorithms that allow us to search sequentially the boundary values of the quantization intervals in scalar quantizers have been developed [15, 16].
It was also observed in [17] that multiple disjoint quantization bins at nodes can be merged to a single bin or a single codeword without performance loss for distributed estimation systems. The merging technique determines the true bin transmitted from the node of interest by taking into account the measurements from other nodes, hence achieving a significant rate reduction. In addition, an iterative quantizer design algorithm was proposed [18] to incorporate non-regularity into design process, implying the correspondence between multiple disjoint partitions and a single codeword: specifically, the bins (e.g., intervals in scalar quantizers) are regarded as the elements that will be processed for mapping to their corresponding codewords (or reconstruction values), resulting in disjoint Voronoi regions (e.g., union of multiple intervals in scalar quantizers). Recently, a novel encoding scheme of assigning multiple codewords to each quantization partition was proposed to implement a low-weight independent encoding of the optimal partitions [19].
In this paper, we consider an iterative design of independently operating local quantizers at nodes in the Lloyd algorithm framework. Instead of directly minimizing the estimation error, we choose to use an indirect metric related to the posterior distribution. Specifically, we define quantization of the posterior distribution and focus on minimizing the probabilistic distance between the posterior distribution and its quantized version as a global cost function. We first express the cost function as the KL divergence which typically causes a high computational cost. We develop a feasible design procedure by presenting the analysis that minimizing the KL divergence is equivalently reduced to maximizing the logarithmic quantized posterior distribution on the average which is properly further simplified as a new cost function, yielding a substantial reduction in design complexity. We discuss that the proposed algorithm converges to a global optimum due to the convexity of the cost function in the quantized posterior distribution [20], which is experimentally examined by showing that our design operates robust under various test conditions. We also show that the proposed quantizer generates the most informative quantized measurements, which would be efficiently used by estimation techniques at fusion node to improve the estimation performance.
We highlight that an independent encoding minimizing the global cost function can be accomplished, and we also provide an efficient approximation to the encoding technique that avoids a computational burden at each node in practical use while maintaining a reasonable estimation performance. Note that most of the previous works conduct encoding of local measurements by simply computing the local Euclidean distance between sensor readings and codewords. We finally demonstrate through extensive simulations that the proposed algorithm performs well with respect to the previously developed techniques [14, 18], owing to its two main advantages: the global optimality and an encoding technique designed to optimize the system-wide metric. In this work, it is assumed that the sensor nodes do not exchange data with each other; they send their measurements to a fusion node via reliable communication links.
This paper is organized as follows. The problem formulation of the quantizer design is given in Section 2. A new cost function is introduced and properly incorporated into our iterative design algorithm in Section 3. Discussions of performance of the proposed algorithm and design complexity for our encoding technique are provided in Section 3.1 and Section 3.2. An application example for the proposed algorithm is briefly presented in Section 4. Simulation results are given in Section 5, and the conclusions are found in Section 6.
We consider a distributed estimation system where M sensor nodes are randomly deployed at known spatial locations, \(\mathbf{x}_{i}\in\mathbb{R}^{2}, i=1,\ldots,M\). Each node senses the signals generated from the unknown parameter \(\theta\in\mathbb{R}^{N}\) and sends its measurement to a fusion node for estimation of the parameter. Assuming the sensing model \(f_{i}(\theta,\mathbf{x}_{i})\) employed at node i, the measurement at node i, denoted by \(z_{i}\), can be expressed as follows:
$$ z_{i}(\mathbf{\theta})~=~f_{i}\left(\mathbf{\theta},\mathbf{x}_{i}\right)+\omega_{i},\quad i~=~1,\ldots,M $$
where the measurement noise \(\omega_{i}\) is assumed to follow the normal distribution \(N(0,\sigma_{i}^{2})\), and the measurements are also assumed to be statistically independent of each other given the parameter; that is, \(p(z_{1},\cdots,z_{M}|\theta)=\prod_{i=1}^{M} p(z_{i}|\theta)\). Each node uses an \(R_{i}\)-bit quantizer with quantization level \(L_{i}=2^{R_{i}}\) and dynamic range \(D_{i}=\left[z_{i}^{\text{min}}\quad z_{i}^{\text{max}}\right]\). Note that the quantization range \(D_{i}\) can be determined for each node based on its sensing range. Each node quantizes its measurement and generates the codeword \(\hat{z}_{i}\) for \(z_{i}\) according to its encoding rule (e.g., the minimum Euclidean distance rule). For example, if the measurement \(z_{i}\) belongs to the jth quantization partition \({V_{i}^{j}}\) under its encoding rule, node i transmits the jth codeword \(\hat{z}_{i}^{j}\) to a fusion node, which produces an estimate of the parameter, \(\hat{\mathbf{\theta}}\), from the received quantized measurements \(\hat{z}_{i}, i=1,\ldots,M\), from all nodes.
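A minimal sketch of this measure-quantize pipeline, with all values hypothetical (a toy sensing model, a 2-bit codebook on [0, 1], and a simple nearest-codeword encoder standing in for the encoding rules designed later):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(theta, x_i, f_i, sigma_i):
    """Noisy measurement z_i = f_i(theta, x_i) + w_i, as in the sensing model above."""
    return f_i(theta, x_i) + rng.normal(0.0, sigma_i)

def quantize(z, codebook):
    """Map z to the nearest codeword (a simple regular encoder)."""
    j = int(np.argmin(np.abs(codebook - z)))
    return j, codebook[j]

# Toy R_i = 2-bit quantizer: L_i = 4 codewords on the dynamic range [0, 1].
codebook = np.linspace(0.125, 0.875, 4)   # [0.125, 0.375, 0.625, 0.875]
z = measure(0.4, None, lambda t, x: t, 0.05)   # hypothetical f_i(theta, x_i) = theta
j, zhat = quantize(0.4, codebook)              # 0.4 maps to codeword 0.375 (index 1)
```

The fusion node would then estimate the parameter from the received codewords; the point of the rest of the paper is to replace the nearest-codeword rule with an encoding that optimizes a global metric.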
Notation: A large proportion of our notation will be introduced as needed. However, a few basic notations are given now: the bold characters \(\mathbf{z}_{1}^{M}\) and \(\hat{\mathbf{z}}_{1}^{M}\) indicate a vector of measurements \((z_{1},\cdots,z_{M})\) and a vector of codewords \((\hat{z}_{1},\cdots,\hat{z}_{M})\), respectively, and the parameter θ is treated as a vector of parameters \((\theta_{1},\cdots,\theta_{N})\). In addition, \(\mathbf{z}_{1/i}^{M}\) is shorthand for the vector of M − 1 measurements \((z_{1},\cdots,z_{i-1},z_{i+1},\cdots,z_{M})\); the subscript i indicates the element omitted from the set of measurements.
Criteria for quantizer optimization
Obviously, quantizers optimized in the Lloyd framework for distributed estimation systems should seek to minimize the estimation error, \(\parallel \theta -\hat {\theta }\parallel ^{2}\), which is a function of all of the codewords generated from the M nodes involved. Thus, the quantization partitions and their corresponding codewords are iteratively generated to reduce the estimation error at each step, while such quantization partitions remain independently encodable at each node.
To ensure independent encoding and minimization of the estimation error, the two crucial conditions for quantizer design algorithms in distributed estimation systems, several global metrics related to the estimation error were previously developed: the distributional distance [9] for distributed detection and the global distance function for distributed estimation [13]. From this perspective, we suggest quantization of the posterior distribution \(p(\theta |\mathbf {z}_{1}^{M})\) and seek to design local quantizers that minimize the probabilistic distance between \(p(\theta |\mathbf {z}_{1}^{M})\) and its quantized distribution, which can be expressed as the KL divergence [20]. We show that using this distance as a new cost function provides several benefits for quantizer design in distributed estimation: first, minimizing the probabilistic distance results in quantizers that generate the codewords maximizing the logarithmic quantized posterior distribution \(\log p(\theta |\hat {\mathbf {z}}_{1}^{M})\) on the average, thus improving the estimation accuracy. Second, the independent encoding can be efficiently performed since the probabilistic distance is computed based on \(p(\theta |\mathbf {z}_{1}^{M})\) and the M quantizers, not requiring the actual measurements at the other nodes. Third, it allows us to establish a global encoding of local measurements into their quantization partitions, which would not be achieved by the typical encoding rules (e.g., the minimum Euclidean distance rule) used in previous novel design techniques. The benefits of our algorithm will be elaborated in the following sections.
Quantizer design algorithm
We consider for a given rate R i ,i = 1,⋯,M the problem of designing independent local quantizers that minimize the KL divergence between the posterior distribution \(p(\theta |\mathbf {z}_{1}^{M})\) and its quantized one denoted by \(q(\theta |\mathbf {z}_{1}^{M})\) which is defined from quantization of \(p(\theta |\mathbf {z}_{1}^{M})\): formally,
$$ q\left(\theta|\mathbf{z}_{1}^{M}\right)~=~p\left(\theta|\hat{\mathbf{z}}_{1}^{M}\right), \quad Q_{i}(z_{i})~=~\hat{z}_{i}, i~=~1,\cdots,M $$
where Q i indicates the quantizer employed at node i.
First, we simplify our metric denoted by \(D_{\text {KL}}\left [p\left (\theta |\mathbf {z}_{1}^{M}\right)||q\left (\theta |\mathbf {z}_{1}^{M}\right)\right ]\) to avoid unnecessary computations for quantizer design at each node. By definition of the KL divergence, we have
$$\begin{array}{@{}rcl@{}} D_{\text{KL}} &=& \sum_{\theta} p(\theta) \sum_{\mathbf{z}_{1}^{M}} p\left(\mathbf{z}_{1}^{M}|\theta\right) \log \frac{p\left(\theta|\mathbf{z}_{1}^{M}\right)}{q\left(\theta|\mathbf{z}_{1}^{M}\right)}\\ &=& E_{\theta,\mathbf{z}_{1}^{M}} \log p\left(\theta|\mathbf{z}_{1}^{M}\right) - E_{\theta,\mathbf{z}_{1}^{M}} \log q\left(\theta|\mathbf{z}_{1}^{M}\right) \end{array} $$
Noting that the first term is irrelevant to minimization of the metric over quantizers Q i , we can find the quantizers Q ∗=[Q 1,⋯,Q M ] minimizing the KL divergence as follows:
$$\begin{array}{@{}rcl@{}} \mathbf{Q}^{*}&=&\arg \max_{Q_{1},\cdots,Q_{M}} E_{\theta,\mathbf{z}_{1}^{M}} \log q\left(\theta|\mathbf{z}_{1}^{M}\right)\\ &=&\arg \max_{Q_{1},\cdots,Q_{M}} E_{\mathbf{z}_{1}^{M}} E_{\theta|\mathbf{z}_{1}^{M}}\log q(\theta|\mathbf{z}_{1}^{M}) \end{array} $$
Thus, our problem is reduced to that of designing a set of quantizers that maximize the metric \(E_{\theta,\mathbf {z}_{1}^{M}} \log q\left (\theta |\mathbf {z}_{1}^{M}\right)\).
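A small numerical check of this equivalence, with hypothetical three-point distributions: since the term \(E \log p\) does not depend on the quantizers, ranking candidate quantized posteriors by KL divergence or by average log-likelihood picks the same winner.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])               # posterior p(theta | z), hypothetical
candidates = [np.array([0.4, 0.4, 0.2]),    # candidate quantized posteriors q
              np.array([0.6, 0.2, 0.2]),
              np.array([1/3, 1/3, 1/3])]

def kl(p, q):
    """D_KL(p || q) = sum_theta p log(p / q)."""
    return float(np.sum(p * np.log(p / q)))

def avg_log_q(p, q):
    """E_p[log q], the quantity actually maximized in the design."""
    return float(np.sum(p * np.log(q)))

best_by_kl = min(range(len(candidates)), key=lambda k: kl(p, candidates[k]))
best_by_log = max(range(len(candidates)), key=lambda k: avg_log_q(p, candidates[k]))
# Both criteria select the same candidate distribution.
```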
It should be noticed that we optimize a quantizer at each node, while quantizers for the other nodes remain unchanged. This is done successively for each sensor node and repeated over all nodes until a stopping criterion is satisfied. This notion allows us to make a further simplification of the metric for faster computation by removing irrelevant terms:
$$\begin{array}{@{}rcl@{}} E_{\theta,\mathbf{z}_{1}^{M}} \log q\left(\theta|\mathbf{z}_{1}^{M}\right) &=& E_{\theta,\mathbf{z}_{1}^{M}} \left[ \log q\left(\mathbf{z}_{1}^{M}|\theta\right) + \log p(\theta) - \log q\left(\mathbf{z}_{1}^{M}\right)\right]\\ &=& E_{\theta,\mathbf{z}_{1}^{M}} \left[ \log q\left(\mathbf{z}_{1/i}^{M}|\theta\right) + \log p(\theta) + \log q(z_{i}|\theta) - \log q\left(\mathbf{z}_{1}^{M}\right)\right]\\ &\propto& E_{\theta,\mathbf{z}_{1}^{M}} \left[ \log q(z_{i}|\theta) - \log q\left(\mathbf{z}_{1}^{M}\right)\right] \end{array} $$
where (5) follows from the independence of \(\mathbf{z}_{1}^{M}\) given the parameter, and (6) follows from the observation that the first and second terms in (5) are irrelevant for quantizer design at node i. Note that \(q=p\left(\hat{z}_{i}^{j}|\theta\right)\) when \(z_{i}\) is assigned to the jth quantization partition or the jth codeword.
Now, we are in a position to consider the quantizer design process in the generalized Lloyd design framework. First, we construct the Voronoi region so as to maximize (6) as follows:
$$\begin{array}{@{}rcl@{}} {V_{i}^{j}} &=& \left\{z_{i}: E_{\theta} \left[p(z_{i}|\theta)\left(\log p\left(\hat{z}_{i}^{j}|\theta\right) - E_{\mathbf{z}_{1/i}^{M}|\theta} \log p\left(\hat{\mathbf{z}}_{1/i}^{M}, z_{i}=\hat{z}_{i}^{j}\right)\right)\right]\right.\\ && \left.\geq E_{\theta} \left[p(z_{i}|\theta)\left(\log p\left(\hat{z}_{i}^{k}|\theta\right) - E_{\mathbf{z}_{1/i}^{M}|\theta} \log p\left(\hat{\mathbf{z}}_{1/i}^{M}, z_{i}=\hat{z}_{i}^{k}\right)\right)\right], \; \forall k\neq j\right\} \end{array} $$
where \(p\left (\hat {z}_{i}^{j}|\theta \right)\) is given by \(p\left (z_{i}=\hat {z}_{i}^{j}|\theta \right)\sim N\left (\,f_{i}(\theta), {\sigma _{i}^{2}}\right)\).
Second, we compute the codeword corresponding to \({V_{i}^{j}}\) in a similar manner:
$$ \hat{z}_{i}^{j*} = \arg\max_{\hat{z}_{i}\in D_{i}} E_{\theta} \left[ \sum_{z_{i}\in {V_{i}^{j}}} p(z_{i}|\theta)\left(\log p\left(\hat{z}_{i}|\theta\right) - E_{\mathbf{z}_{1/i}^{M}|\theta} \log p\left(\hat{\mathbf{z}}_{1/i}^{M}, z_{i}=\hat{z}_{i}\right)\right)\right] $$
It should be observed that in real situations, local quantizers must operate independently of the other quantizers. Hence, an independent encoding is a crucial requirement for such quantizer designs. We present an encoding technique that assigns a local measurement \(z_{i}\) to one of the quantizer partitions so as to maximize the metric employed in the design, as follows:
$$ V_{i}^{j*} = \arg\max_{1\leq j\leq L_{i}} E_{\theta} \left[ p(z_{i}|\theta)\left(\log p\left(\hat{z}_{i}^{j}|\theta\right) - E_{\mathbf{z}_{1/i}^{M}|\theta} \log p\left(\hat{\mathbf{z}}_{1/i}^{M}, z_{i}=\hat{z}_{i}^{j}\right)\right)\right] $$
Obviously, the encoding process in (9) is carried out by using \(p(\mathbf {z}_{1}^{M}|\theta)\) and M quantizers without requiring actual measurements \(\mathbf {z}_{1/i}^{M}\) at the other nodes.
Remarks on optimality and performance
Since the proposed algorithm is conducted in the generalized Lloyd design framework, it could in principle suffer from numerous poor local optima. However, the metric (3) is shown to be convex in the quantized distribution \(q(\theta |\mathbf {z}_{1}^{M})\) given \(p(\theta |\mathbf {z}_{1}^{M})\) (see [20] for the proof), implying that any local minimum must be a global minimum. In addition, the quantized distribution is uniquely determined by the quantizers (refer to (2)), and designing quantizers that reduce the metric at each step is equivalent to finding the corresponding quantized distributions. Thus, it is concluded that our algorithm always results in quantizers that globally minimize the metric and thus provides robustness to various design factors.
It has also been shown from (4) that minimizing \(D_{\text{KL}}\) over quantizers is equivalent to maximizing the average logarithmic quantized posterior distribution. For example, suppose that, given two different sets of M quantizers, say \(\mathbf {Q}_{1}^{M}\) and \(\tilde {\mathbf {Q}}_{1}^{M}\), where \(\mathbf {Q}_{1}^{M}\) indicates our proposed quantizers given by (4), a certain parameter θ is sensed by M nodes, which in turn generate \(\hat {\mathbf {z}}_{1}^{M}\) and \(\hat {\tilde {\mathbf {z}}}_{1}^{M}\), respectively. Then, it can be stated that the proposed quantizers generate better quantized measurements in the sense that \(\log p(\theta |\hat {\mathbf {z}}_{1}^{M})\geq \log p(\theta |\hat {\tilde {\mathbf {z}}}_{1}^{M})\) on the average. From a different perspective, the performance of the proposed quantizers can be further examined by rewriting our metric for each \(\mathbf {z}_{1}^{M}\) and simplifying it in the high-resolution regime as follows:
$$\begin{array}{@{}rcl@{}} \mathbf{Q}^{*} &=& \arg \max_{Q_{1},\cdots,Q_{M}} \sum_{\theta} p\left(\theta|\mathbf{z}_{1}^{M}\right)\log q\left(\theta|\mathbf{z}_{1}^{M}\right)\\ &=& \arg \max_{Q_{1},\cdots,Q_{M}} \sum_{\theta} p\left(\theta|\mathbf{z}_{1}^{M}\right)\log p\left(\theta|\hat{\mathbf{z}}_{1}^{M}\right)\\ &\approx& \arg \max_{Q_{1},\cdots,Q_{M}} \sum_{\theta} p\left(\theta|\hat{\mathbf{z}}_{1}^{M}\right)\log p\left(\theta|\hat{\mathbf{z}}_{1}^{M}\right)\\ &=& \arg \min_{Q_{1},\cdots,Q_{M}} H\left(\theta|\hat{\mathbf{z}}_{1}^{M}=\mathbf{Q}_{1}^{M}\left(\mathbf{z}_{1}^{M}\right)\right) \end{array} $$
where (10) follows from the definition of \(q\left (\theta |\mathbf {z}_{1}^{M}\right)\), (11) is derived from the high-resolution assumption, and (12) is obtained from the definition of the conditional entropy \(H\left (\theta |\hat {\mathbf {z}}_{1}^{M}\right)\). Since the entropy can be minimized by choosing the most informative distributions \(p\left (\theta |\hat {\mathbf {z}}_{1}^{M}\right)\), our quantizers would generate the most informative quantized measurements, yielding a good estimation accuracy which will be investigated by conducting extensive experiments in Section 5.
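The conditional-entropy view in (12) can be illustrated with a toy discrete model (all values hypothetical): a finer quantizer yields a more informative quantized posterior, i.e., a smaller \(H(\theta|\hat{z})\).

```python
import numpy as np

def cond_entropy(joint):
    """H(theta | zhat) in bits, from a joint pmf over (theta, zhat)."""
    h = 0.0
    for j, pz in enumerate(joint.sum(axis=0)):   # marginal p(zhat = j)
        if pz > 0:
            cond = joint[:, j] / pz              # p(theta | zhat = j)
            nz = cond[cond > 0]
            h += pz * float(-(nz * np.log2(nz)).sum())
    return h

# theta uniform on {0, 1, 2, 3}; noiseless reading z = theta.
# 1-bit quantizer: zhat = z // 2; 2-bit quantizer: zhat = z.
coarse = np.zeros((4, 2))
fine = np.zeros((4, 4))
for t in range(4):
    coarse[t, t // 2] = 0.25
    fine[t, t] = 0.25
```

Here the 1-bit quantizer leaves one bit of uncertainty about θ, while the 2-bit quantizer leaves none, matching the intuition that the design in (12) favors the most informative quantized measurements.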
Reduction of encoding complexity
It should be emphasized that one of the benefits of our algorithm is an encoding technique that operates on local measurements and at the same time optimizes our global metric, whereas most of the previous designs employ the minimum Euclidean distance rule to independently assign local measurements to the predetermined quantization partitions. In our design, encoding of \(z_{i}\) to one of the partitions is independently executed in a system-wide sense, i.e., so that the metric \(E\log q\left(\theta |\mathbf {z}_{1}^{M}\right)\) is maximized, although such encoding requires a high computational cost at the nodes.
In this section, we consider a reduction of the encoding complexity for practical use in power-constrained sensor nodes in distributed systems. Noting that, given \(z_{i}\), the region of θ denoted by \(A_{\theta}(z_{i})\) with \(p(\theta\in A_{\theta}(z_{i})|z_{i})\approx 1\) can be easily constructed, the independent encoding in (9) can be approximately conducted as follows:
$$ V_{i}^{j*} \approx \arg\max_{j} E_{\theta\in A_{\theta}} \left[ p(z_{i}|\theta)\left(\log p\left(\hat{z}_{i}^{j}|\theta\right) - E_{\mathbf{z}_{1/i}^{M}\in B_{\theta}|\theta} \log p\left(\hat{\mathbf{z}}_{1/i}^{M}, z_{i}=\hat{z}_{i}^{j}\right)\right)\right] $$
where \(B_{\theta}\) is a set of the measurements at the other nodes and can be substantially reduced again by using \(A_{\theta}\): that is, \(B_{\theta}\approx \left\{z_{1/i}^{M}(\theta): \theta \in A_{\theta}\right\}\). This further approximation reduces the encoding complexity dramatically.
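A sketch of the effect of restricting the θ expectation to a high-probability region (hypothetical grid, Gaussian likelihood, and a stand-in integrand g for the bracketed term of the approximate encoding rule): a 4-sigma window leaves the expectation essentially unchanged while touching only a small fraction of the grid.

```python
import numpy as np

def gauss(z, mean, sigma):
    return np.exp(-0.5 * ((z - mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

thetas = np.linspace(0.0, 10.0, 2001)         # parameter grid
sigma, z_i = 0.2, 4.0
lik = gauss(z_i, thetas, sigma)               # p(z_i | theta), sensing model f(theta) = theta
g = np.log1p(thetas)                          # hypothetical stand-in for the bracketed term

full = float(np.sum(lik * g))                 # expectation over the whole grid
mask = np.abs(thetas - z_i) <= 4.0 * sigma    # A_theta: p(theta in A_theta | z_i) ~ 1
reduced = float(np.sum(lik[mask] * g[mask]))  # truncated expectation
rel_err = abs(full - reduced) / abs(full)
```

Only about 16% of the grid points are visited, yet the relative error of the truncated expectation is far below 0.1%, which is the kind of saving the approximation above is after.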
Summary of algorithm
The design algorithm at node i is summarized as follows and is iteratively executed over all sensor nodes i = 1,…,M.
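Since the algorithm listing itself is not reproduced here, the per-node loop can be sketched generically (a hedged skeleton: the partition update and codeword update are passed in as callbacks; below they are instantiated with classic minimum-MSE Lloyd updates on a hypothetical 1-D data set, just to exercise the loop; the paper's metric plugs in the same way):

```python
import numpy as np

def lloyd_iterate(update_partitions, update_codewords, metric,
                  codebooks, i_node, max_iter=100, tol=1e-9):
    """Generic per-node design loop: alternate the partition update and the
    codeword update for node i, with the other nodes' quantizers held fixed,
    until the (to-be-maximized) metric stops improving."""
    prev = -np.inf
    for _ in range(max_iter):
        parts = update_partitions(codebooks, i_node)
        codebooks = update_codewords(parts, codebooks, i_node)
        cur = metric(codebooks)
        if cur - prev <= tol:
            break
        prev = cur
    return codebooks

# Stand-in instantiation: 1-D Lloyd (negative MSE as the metric) on one node.
data = np.linspace(0.0, 1.0, 101)

def parts_mse(cbs, i):
    return np.argmin(np.abs(data[:, None] - cbs[i][None, :]), axis=1)

def codewords_mse(parts, cbs, i):
    cb = np.array([data[parts == j].mean() if np.any(parts == j) else cbs[i][j]
                   for j in range(len(cbs[i]))])
    out = list(cbs)
    out[i] = cb
    return out

def metric_mse(cbs):
    return -float(np.mean(np.min((data[:, None] - cbs[0][None, :]) ** 2, axis=1)))

cbs = lloyd_iterate(parts_mse, codewords_mse, metric_mse, [np.array([0.1, 0.2])], 0)
```

Starting from a poor codebook, the loop converges to codewords near 0.25 and 0.75, the Lloyd fixed point for uniform data, and the metric never decreases across iterations.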
Application of quantizer design algorithm
In this section, as an application of our design algorithm, we briefly introduce a source localization system where M nodes equipped with acoustic amplitude sensors measure the signal energy generated from a source located at an unknown location \(\theta\in\mathbb{R}^{2}\) and quantize the measurements before sending them to a fusion node for localization. In expressing the signal energy measured at the nodes, we adopt an energy decay model which was proposed and experimentally verified in [21] and employed in [22, 23]. The signal energy measured at node i, denoted by \(z_{i}\), can be expressed as follows:
$$ z_{i}(\mathbf{\theta})~=~g_{i}\frac{a}{\left\|\mathbf{\theta}-\mathbf{x}_{i}\right\|^{\alpha}}+w_{i}, $$
where g i is the gain factor at node i and α is the energy decay factor, which is approximately equal to 2 in free space. Note that a sound source generates acoustic energy which attenuates at a rate inversely proportional to the square of the distance in free space [24]. The signal energy a, which can be jointly estimated with the source location [25], is assumed to be known during the localization process. It is also assumed that the measurement noise w i can be approximated by a normal distribution, \(N(0,{\sigma _{i}^{2}})\).
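The measurement model can be written directly in code. A minimal sketch using the paper's default parameters (g i = 1, a = 50, α = 2); the function name is ours:

```python
import numpy as np

def sensor_energy(theta, x_i, g_i=1.0, a=50.0, alpha=2.0, sigma_i=0.0,
                  rng=None):
    """z_i(theta) = g_i * a / ||theta - x_i||^alpha + w_i,
    with Gaussian measurement noise w_i ~ N(0, sigma_i^2)."""
    dist = np.linalg.norm(np.asarray(theta, float) - np.asarray(x_i, float))
    w = 0.0
    if sigma_i > 0:
        w = (rng or np.random.default_rng()).normal(0.0, sigma_i)
    return g_i * a / dist ** alpha + w
```

For example, a source 5 m from a node yields 50/5² = 2 energy units in the noiseless case.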
In this section, we design the proposed quantizers using training sets in which the source locations are assumed to be uniformly distributed and the local measurements are collected with the model parameters α = 2, g i = 1, and a = 50 in a noiseless condition, \({\sigma _{i}^{2}}~=~\sigma ^{2}~=~0\). In testing our quantizers, we apply the two encoding techniques in (9) and (13), denoted the probabilistic distance-based quantizer (PDQ) and PDQ-reduced (PDQ-R), respectively. In the experiments, we first consider a sensor network where M(=5) sensors are deployed in a 10×10 m² sensor field. For each of 100 different sensor configurations, we design uniform quantizers (Unif Q), Lloyd quantizers (Lloyd Q), and the proposed quantizers for R i = 2,3,4 and evaluate them by generating a test set of 1000 source locations from the model parameters assumed during quantizer design.
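The Lloyd Q baseline mentioned above is the standard Lloyd-Max scalar quantizer trained on sample measurements; a sketch of that baseline (our own implementation, not the paper's code):

```python
import numpy as np

def lloyd_quantizer(train, levels, iters=50):
    """Standard Lloyd-Max scalar quantizer trained on a 1-D sample of
    sensor measurements (the 'Lloyd Q' baseline).  Returns reproduction
    points; partition thresholds are midpoints between them."""
    # spread initialization: interior quantiles of the training sample
    reps = np.quantile(train, np.linspace(0, 1, levels + 2)[1:-1])
    for _ in range(iters):
        thr = (reps[:-1] + reps[1:]) / 2          # nearest-neighbor boundaries
        idx = np.searchsorted(thr, train)         # assign samples to cells
        reps = np.array([train[idx == j].mean() if np.any(idx == j) else reps[j]
                         for j in range(levels)]) # centroid update
    return reps
```

On a bimodal sample with clusters at 0 and 10, a 2-level design converges to reproduction points at the two cluster centers.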
Experiments are conducted extensively to investigate the effectiveness of the different design algorithms and their sensitivity to parameter perturbation and to variation of the noise level. Furthermore, since typical sensor networks employ many sensor nodes in a large sensor field, we also consider a larger 20×20 m² sensor field to test our algorithm against typical designs. In the experiments, performance is evaluated by comparing the average localization error \(E\|\mathbf {\theta }-\hat {\mathbf {\theta }}\|^{2}\), computed with the maximum likelihood (ML) estimation technique for fast computation.
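The evaluation pipeline can be sketched as a grid-search ML estimator over candidate locations, with the average localization error computed over the test set. A simplified sketch under a Gaussian-noise assumption; the grid resolution, function names, and parameters are ours:

```python
import numpy as np

def ml_localize(z_hat, sensors, grid, g=1.0, a=50.0, alpha=2.0, sigma=0.15):
    """Grid-search ML estimate of the source location from (quantized)
    energy readings z_hat, assuming i.i.d. Gaussian measurement noise."""
    best, best_ll = None, -np.inf
    for theta in grid:
        d = np.linalg.norm(sensors - theta, axis=1)
        mu = g * a / d ** alpha                    # model-predicted energies
        ll = -np.sum((z_hat - mu) ** 2) / (2 * sigma ** 2)
        if ll > best_ll:
            best, best_ll = theta, ll
    return best

def avg_loc_error(estimates, truths):
    """Average localization error E||theta - theta_hat||^2 over a test set."""
    diff = np.asarray(estimates) - np.asarray(truths)
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```

With noiseless measurements and the true location on the grid, the estimator recovers it exactly.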
Comparison with traditional quantizers
First, our quantizer is compared with typical standard designs, namely uniform quantizers and Lloyd Q, in Fig. 1, where the localization error (in meters) is averaged over 100 node configurations for each rate R i . For a clear comparison, the overall rate-distortion (R-D) curves are depicted for the different quantizers. As expected, PDQ provides a significant performance gain over the traditional quantizers, since our proposed algorithm iteratively finds the probabilistic distance-based mapping that generates better quantized measurements in the sense of a better quantized posterior distribution. It should further be noticed that PDQ-R also shows a considerable performance improvement, which justifies the approximation used to derive our low-complexity encoding technique in (13).
Fig. 1. Comparison of PDQ with typical design techniques. The average localization error is plotted vs. the total rate consumed by M nodes
Performance evaluation: comparison with the previous novel designs
We further examine the performance of the proposed design algorithm by comparing it with previous novel design techniques, namely the localization-specific quantizer (LSQ) in [14] and the distributed optimized quantizer (DOQ) in [18]. Note that both were developed as distributed source coding (DSC) techniques for distributed estimation systems and were tested for source localization in acoustic amplitude sensor networks in previous work. In designing the quantizers, we initialize them with the equally distance-divided quantizer (EDQ) to avoid possibly poor local minima. Note that EDQ can be designed simply by uniformly dividing the sensing distance, rather than the dynamic range of the measurement. EDQ shows good localization performance and is therefore used as an efficient initialization for quantizer design [14, 26].
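The EDQ initialization described here divides the sensing *distance* uniformly and maps the cut points through the energy decay model, so that partition boundaries in the measurement domain satisfy z = g·a/d^α. A hedged sketch (our construction of the idea, with the paper's default parameters):

```python
import numpy as np

def edq_thresholds(levels, d_min, d_max, g=1.0, a=50.0, alpha=2.0):
    """Equally distance-divided quantizer (EDQ) sketch: uniform cuts in
    distance, mapped to measurement-domain thresholds z = g*a/d^alpha.
    Returns the levels-1 thresholds in increasing z order."""
    d_cuts = np.linspace(d_min, d_max, levels + 1)[1:-1]   # interior distance cuts
    z_thr = g * a / d_cuts ** alpha
    return np.sort(z_thr)
```

For a 2-level EDQ over distances 1-3 m, the single cut at d = 2 m maps to the energy threshold 50/2² = 12.5.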
In the experiments, we collect two test sets from 1000 random source locations, with measurement noise σ i = 0 and σ i = 0.15, respectively, for evaluation. The R-D curves for the design techniques are illustrated in Fig. 2. Notably, PDQ outperforms LSQ, mainly because our algorithm enables global encoding (in our case, probabilistic distance-based encoding) whereas LSQ operates with a regular encoding (i.e., the minimum Euclidean distance rule). In addition, our quantizer performs well with respect to DOQ, which adopts a non-regular mapping with a huge design complexity (see the details in [18]). Note that our algorithm focuses on minimizing the probabilistic distance caused by quantization, rather than directly optimizing the estimation accuracy. Nonetheless, PDQ offers a noteworthy performance improvement compared with the previous novel designs, which can be explained by the analysis that our algorithm always produces a global optimum through its powerful encoding technique, whereas the others, operating on a local distance rule, can suffer from poor local optima.
Fig. 2. Comparison of PDQ (PDQ-R) with novel design techniques. The average localization error in meters is plotted vs. the total rate (bits) consumed by five sensors with σ = 0 (left) and σ = 0.15 (right), respectively
Sensitivity analysis of design algorithms
In this section, we first examine the proposed algorithm by perturbing the model parameters from those assumed in the design stage. We also investigate the performance of the different design algorithms in the presence of measurement noise, since the quantizers are designed using training sets generated under the assumption of noiseless measurements (σ = 0). Our proposed algorithm can be expected to show strong robustness to these design factors, since it pursues a global optimum.
Sensitivity of PDQ to parameter perturbation
In this experiment, PDQ is tested under various types of parameter perturbation. For each test, we varied one of the model parameters (i.e., the decay factor α or the gain factor g i ) from the value used during the training stage of the quantizers. It is assumed that the true parameters are available at the fusion node during localization, so as to inspect only the effect of the quantizer design on the localization performance. Note that this assumption is quite reasonable, since the localization algorithms provide good robustness to parameter perturbation (see [25]). The experimental results are given in Table 1. As expected, PDQ shows better robustness to variation of the gain factor than to variation of the decay factor, since the latter causes more severe distortion in the local measurements. Clearly, our design operates very reliably in the presence of small perturbations of the model parameters.
Table 1 Localization error (LE) of PDQ with R i = 3 due to variations of the model parameters
Sensitivity of design algorithms to noise level
In this experiment, we study the sensitivity of the various design algorithms to the noise level. For each configuration, a test set of 1000 source locations with signal-to-noise ratio (SNR) in the range from 40 dB to 100 dB is generated by varying σ. Assuming the source signal energy a is known, the SNR is measured at 1 m from the source as \(10\log _{10} \frac {a^{2}}{\sigma ^{2}}\). For typical applications, the variance of the measurement noise amounts to σ² = 0.05² (= 60 dB), and the SNR can often be much higher than 40 dB for practical vehicle targets [21, 23]. As can be seen in Fig. 3, PDQ performs quite well with respect to the other novel designs in noisy cases.
Fig. 3. Sensitivity to noise level. The average localization error is plotted vs. SNR (dB) with M = 5, R i = 3 and a = 50
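The SNR convention above is easy to check numerically; in particular, σ = 0.05 with a = 50 gives exactly 60 dB, which is how the quoted noise variance σ² = 0.05² corresponds to the 60 dB operating point:

```python
import math

def snr_db(a, sigma):
    """SNR measured 1 m from the source: 10 * log10(a^2 / sigma^2)."""
    return 10.0 * math.log10(a ** 2 / sigma ** 2)
```

For example, `snr_db(50, 0.05)` evaluates to 60 dB, and `snr_db(50, 0.5)` to 40 dB, the lower end of the tested range.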
Performance analysis in a larger sensor network: comparison with traditional quantizers
In this section, we evaluate our design algorithm in larger sensor networks by comparing it with typical designs. In this experiment, we generate 20 different sensor configurations in a larger sensor field, 20×20 m², for M = 12,16,20. For each sensor configuration, our quantizers are designed with a given rate of R i = 3 and the same dynamic range as in the experiments conducted in the 10×10 m² sensor field. The localization results are provided in Fig. 4. It can be seen that our design algorithm provides very good performance compared with uniform quantizers and Lloyd quantizers.
Fig. 4. Performance evaluation in a larger sensor network. The average localization error is plotted vs. the total number of sensor nodes in a 20×20 m² sensor field with R i = 3
It should be mentioned that better performance can generally be achieved with a larger number of sensors while the sensor density remains unchanged. In our experiments, the sensor density for M = 20 in a 20×20 m² field is \(\frac {20}{20\times 20}~=~0.05 \), which is equal to that for M = 5 in a 10×10 m² field. This performance gain can be explained by considering the coverage of the sensing range of the nodes, which becomes more efficient as the sensor field gets larger. In other words, sensor nodes located near the edges have poor coverage of their sensing range, leading to performance degradation, and a larger sensor field contains relatively fewer such edge nodes than a smaller field with the same sensor density.
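The density comparison in this paragraph is a one-line calculation; a trivial check:

```python
def sensor_density(m, side):
    """Sensor nodes per square metre in a (side x side) m^2 field."""
    return m / side ** 2

# M = 20 in a 20x20 field matches M = 5 in a 10x10 field:
# both give 0.05 nodes per square metre.
```

So the M = 20 and M = 5 configurations compared in the text are matched in density, isolating the edge-coverage effect.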
In this paper, we have proposed an iterative quantizer design algorithm that seeks to minimize the probabilistic distance between the posterior distribution and its quantized counterpart. The benefits of our algorithm follow from the analysis: the independent encoding minimizing the global probabilistic distance can be implemented at each node, and the global minimum is always guaranteed due to the convexity of the probabilistic distance in our quantizers. In addition, to avoid a computational burden at the nodes during encoding, we have suggested a low-complexity encoding technique which showed reasonable performance. We demonstrated through extensive experiments that our proposed algorithm achieves a significant performance gain over typical designs and remains strongly competitive with previous novel designs. In the future, we will continue to develop quantization techniques that maximize application objectives for distributed systems.
ARE:
Asymptotic relative efficiency
DOQ:
Distributed optimized quantizer
DSC:
Distributed source coding
EDQ:
Equally distance-divided quantizer
KL:
Kullback-Leibler
LE:
Localization error
LSQ:
Localization-specific quantizer
ML:
Maximum likelihood
PDQ:
Probabilistic distance-based quantizer
PDQ-R:
Probabilistic distance-based quantizer-reduced
R-D:
Rate-distortion
SFQ:
Score-functional quantizer
SS Pradhan, K Ramchandran, Distributed source coding using syndromes (DISCUS): design and construction. IEEE Trans. Inf. Theory. 49:, 626–643 (2003).
N Wernersson, J Karlsson, M Skoglund, Distributed quantization over noisy channels. IEEE Trans. Commun. 57:, 1693–1700 (2009).
A Saxena, J Nayak, K Rose, Robust distributed source coder design by deterministic annealing. IEEE Trans. Signal Process. 58:, 859–868 (2010).
W Lam, AR Reibman, Design of quantizers for decentralized estimation systems. IEEE Trans. Commun. 41(11), 1602–05 (1993).
JA Gubner, Distributed estimation and quantization. IEEE Trans. Inf. Theory. 39(4), 1456–1459 (1993).
Z-Q Luo, Universal decentralized estimation in a bandwidth constrained sensor network. IEEE Trans. Inf. Theory. 51(6), 2210–2219 (2005).
A Ribeiro, GB Giannakis, Bandwidth-constrained distributed estimation for wireless sensor networks - Part II: Unknown probability density function. IEEE Trans. Signal Process. 54(7), 2784–2796 (2006).
RM Gray, in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Quantization in task-driven sensing and distributed processing (IEEE, Toulouse, 2006).
M Longo, TD Lookabaugh, RM Gray, Quantization for decentralized hypothesis testing under communication constraints. IEEE Trans. Inf. Theory. 36(2), 241–255 (1990).
HV Poor, Fine quantization in signal detection and estimation. IEEE Trans. Inf. Theory. 34(5), 960–972 (1988).
A Hegde, D Erdogmus, T Lehn-Schioler, YN Rao, JC Principe, in IEEE International Joint Conference on Neural Networks. Vector quantization by density matching in the minimum Kullback-Leibler divergence sense (IEEE, 2004).
P Venkitasubramaniam, L Tong, A Swami, Quantization for maximum ARE in distributed estimation. IEEE Trans. Signal Process. 55(7), 3596–3605 (2007).
YH Kim, Functional quantizer design for source localization in sensor networks. EURASIP J. Adv. Signal Process. 2013(1), 10 (2013).
YH Kim, A Ortega, Quantizer design for energy-based source localization in sensor networks. IEEE Trans. Signal Process. 59(11), 5577–5588 (2011).
YH Kim, Weighted distance-based quantization for distributed estimation. J. Inf. Commun. Convergence Eng. 12(4), 215–220 (2014).
YH Kim, Maximum likelihood (ML)-based quantizer design for distributed estimation. J. Inf. Commun. Convergence Eng. 13(3), 152–158 (2015).
YH Kim, A Ortega, Distributed encoding algorithms for source localization in sensor networks. EURASIP J. Adv. Signal Process. 2010:, 13 (2010).
YH Kim, Quantizer design optimized for distributed estimation. IEICE Trans. Inf. Systems. E97-D(6), 1639–1643 (2014).
YH Kim, Encoding of quantisation partitions optimised for distributed estimation. Electron. Lett. 52(8), 611–613 (2016).
TM Cover, JA Thomas, Elements of Information Theory (Wiley-Interscience Publication, New York, 1991).
D Li, YH Hu, Energy-based collaborative source localization using acoustic microsensor array. EURASIP J. Appl. Signal Process. 2003:, 321–337 (2003).
AO Hero, D Blatt, in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Sensor network source localization via projection onto convex sets (POCS) (IEEE, Philadelphia, 2005).
J Liu, J Reich, F Zhao, Collaborative in-network processing for target tracking. EURASIP J. Appl. Signal Process. 2003:, 378–391 (2003).
TS Rappaport, Wireless Communications: Principles and Practice (Prentice-Hall Inc., New Jersey, 1996).
YH Kim, A Ortega, in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings (IEEE, Toulouse, 2006).
YH Kim, A Ortega, in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Quantizer design for source localization in sensor networks (IEEE, Philadelphia, 2005).
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A1A2055997).
Department of Electronic Engineering, College of Electronic & information Engineering, Chosun University, 309 Pilmun-daero, Dong-gu, Gwangju, 61452, Korea
Yoon Hak Kim
Correspondence to Yoon Hak Kim.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Kim, Y.H. Probabilistic distance-based quantizer design for distributed estimation. EURASIP J. Adv. Signal Process. 2016, 91 (2016). https://doi.org/10.1186/s13634-016-0389-0
Distributed compression
Distributed source coding (DSC)
Quantizer design
Posterior distribution
KL divergence
Generalized Lloyd algorithm
Source localization
A useful identity for Gell-Mann $su(3)$ matrices?
We have the following beautiful result for Pauli $su(2)$ matrices
$$(\vec{\sigma}\cdot\vec{a})(\vec{\sigma}\cdot\vec{b}) = \mathbb{I} ~\vec{a}\cdot\vec{b} + i (\vec{a} \times \vec{b}) \cdot \vec{\sigma}.$$
Do we have a similar structure for Gell-Mann $su(3)$ matrices? Specifically, what would the following be
$$(\vec{\lambda}\cdot\vec{a})(\vec{\lambda}\cdot\vec{b}) = ~?$$
lie-algebra linear-algebra clifford-algebra
W. Voltera
Yes, of course. The anticommutator for Gell-Mann matrices is somewhat more elaborate than for Pauli matrices, as there is also a d-coefficient, so splitting the $\lambda$-matrix bilinear into commutators and anticommutators yields $$ (\vec{\lambda}\cdot\vec{a})(\vec{\lambda}\cdot\vec{b}) = a^\mu \lambda^\mu ~b^\nu \lambda^\nu = a^\mu b^\nu \left (\tfrac{1}{2} [\lambda^\mu,\lambda^\nu] + \tfrac{1}{2} \{\lambda^\mu,\lambda^\nu \}\right )= \\ =a^\mu b^\nu ( if_{\mu \nu\kappa} \lambda^\kappa + d_{\mu\nu\kappa} \lambda^\kappa + \tfrac{2}{3} \delta_{\mu\nu} 1\!\!1) \\ =\tfrac{2}{3} 1\!\!1 a\cdot b +a^\mu b^\nu (if_{\mu \nu\kappa}+d_{\mu \nu\kappa})\lambda^\kappa, $$ the second term being analogous to the cross-product, except now it has both an antisymmetric and a symmetric piece.
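The identity above, and the structure constants it uses, can be checked numerically. A sketch with NumPy, recovering f and d from the standard trace formulas $f_{abc}=\frac{1}{4i}\,\mathrm{Tr}([\lambda_a,\lambda_b]\lambda_c)$ and $d_{abc}=\frac{1}{4}\,\mathrm{Tr}(\{\lambda_a,\lambda_b\}\lambda_c)$ (the matrix ordering and variable names are ours):

```python
import numpy as np

s = 1 / np.sqrt(3)
# The eight Gell-Mann matrices, standard normalization Tr(L_a L_b) = 2 delta_ab.
L = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s, 0, 0], [0, s, 0], [0, 0, -2 * s]],
], dtype=complex)

T = np.einsum('aij,bjk,cki->abc', L, L, L)        # Tr(L_a L_b L_c)
f = ((T - T.transpose(1, 0, 2)) / 4j).real        # antisymmetric structure constants
d = ((T + T.transpose(1, 0, 2)) / 4).real         # symmetric d-coefficients

rng = np.random.default_rng(0)
a, b = rng.normal(size=8), rng.normal(size=8)

lam_a = np.einsum('m,mij->ij', a, L)              # lambda . a
lam_b = np.einsum('n,nij->ij', b, L)              # lambda . b
rhs = (2 / 3) * (a @ b) * np.eye(3) \
      + np.einsum('m,n,mnc,cij->ij', a, b, d + 1j * f, L)
assert np.allclose(lam_a @ lam_b, rhs)
```

The assertion passes for arbitrary real octet vectors a and b, confirming the decomposition into the identity piece and the $(if+d)$ term.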
Bonus point. Combining two octets will yield a reducible 64, $$8\otimes 8= 27\oplus\overline{10}\oplus10\oplus8\oplus8\oplus1 .$$ The symmetric singlet is explicit above (just as the SU(2) singlet for the Pauli matrices is), and the symmetric d term above reduces to one of the two 8s, and not the 27.
The antisymmetric f term reduces to the other 8 and not the 10 and its conjugate.
The 8 hermitian matrices $m^\kappa _{\mu\nu}\equiv (if_{\mu\nu\kappa} +d_{\mu\nu\kappa} )$ are very sparse, much more so than their SU(2) angular momentum analogs. Their (imaginary) antisymmetric piece vanishes unless there are 1 or 3 indices from the set 2,5,7; and their (real) symmetric piece vanishes unless there is an even number of indices from the same set. For instance, $$ m^2= \begin{pmatrix} 0 & 0 & -i &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &1/\sqrt{3} \\ i & 0 & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &i/2 &-1/2 &0 \\ 0 & 0 & 0 &0 &0 &1/2 &i/2 &0 \\ 0 & 0 & 0 &-i/2 &1/2 &0 &0 &0 \\ 0 & 0 & 0 &-1/2 &-i/2 &0 &0 &0 \\ 0 & 1/\sqrt{3} & 0 &0 &0 &0 &0 &0 \end{pmatrix} , $$ and so on. Note this matrix is only 3/16 full!
Cosmas Zachos
$\begingroup$ Thanks, @Cosmas Zachos. Is that imaginary i just sitting with f_{\mu \nu k} or it should be outside the bracket, multiplying both f and d?. $\endgroup$ – W. Voltera Jul 4 '18 at 2:37
$\begingroup$ No, just f. Remember d is symmetric under μν interchange but f is antisymmetric, so i is required from hermitian transposition. $\endgroup$ – Cosmas Zachos Jul 4 '18 at 7:52
(Unfortunately) there is no such generalization: the properties of the Pauli matrices that make such identities possible are closely tied to the $\mathbb{Z}_2\times \mathbb{Z}_2$ graded structure of the matrices (see 2. below).
However, as part of this negative answer I will point you to the following:
Arvind, K. S. Mallesh and N. Mukunda, A generalized Pancharatnam geometric phase formula for three-level quantum systems, available from arxiv.
Patera, J., and H. Zassenhaus. The Pauli matrices in n dimensions and finest gradings of simple Lie algebras of type $A_{n− 1}$, Journal of Mathematical Physics 29.3 (1988): 665-673 (pre-arxiv, behind paywall). See also: Patera, J. The four sets of additive quantum numbers of SU(3). Journal of mathematical physics 30.12 (1989): 2756-2762 (also behind paywall).
The first will give you geometric relations similar to the cross product of Pauli matrices, and also a $\star$ operation on the Gell-Mann matrices, but not what you want. The second will provide you with an alternate basis (of non-hermitian but unitary matrices) that nevertheless has some nice properties (such as $A^3\sim 1_{3\times 3}$) which generalize some of the properties of the Paulis.
(I wish someone can show my answer to be wrong as I'd love to know such a relation.)
ZeroTheHero
September 2014 , Volume 19 , Issue 7
Special issue dedicated to Mauro Fabrizio's 70th birthday
Sandra Carillo, Claudio Giorgi and Maurizio Grasselli
2014, 19(7): i-i doi: 10.3934/dcdsb.2014.19.7i
This Special Issue is dedicated to celebrate Mauro Fabrizio's 70th Birthday.
It is a pleasure and an honour for us to devote it to Mauro with deep appreciation and friendship for the scientist as well as for the man.
Mauro's wide and intense research activity touched many branches of Mathematical Physics. This fact is testified by the variety of subjects studied in the contributions collected here.
Mauro was born on December 17, 1940. He graduated in Bologna in 1965. Dario Graffi, a renowned italian mathematical physicist, was his advisor. He has been full professor in the Universities of Salerno and Ferrara before returning to his Alma Mater. Since 1967 to present he published over 160 papers and 5 books.
For over 45 years, he has been greatly influential through his research contributions in several areas of Mechanics and Thermodynamics, in particular the development of mathematical models of complex systems.
In these areas, starting from Dario Graffi's ideas, Mauro obtained a number of important results in mathematical modeling in continuous thermomechanics, materials with fading memory and hereditary system, electromagnetism of continuous media, first and second order phase transition models.
Mauro has always been a stimulating and open minded Colleague as well as a reliable mentor to many young scientists within the mathematical community. His deep questions and sharp remarks are well known among all the people who had the chance to have him in the audience.
Among Mauro's many recognitions, we only recall that, on June 22, 2012, the prestigious "Premio Linceo per la Meccanica e applicazioni e Matematica" was bestowed upon him by Giorgio Napolitano, the President of the Italian Republic.
The study of complex systems is a multi-faceted area where many different mathematical tools come into play. From functional analysis to calculus of variations, from geometric analysis to semigroup theory and, of course, numerical methods.
The present volume collects 31 peer reviewed contributions of a number of leading scholars in the analysis of mathematical models. It aims to present an overview of some challenging research lines and to stimulate further investigations.
We are grateful to all the authors. They did a great job.
Sandra Carillo, Claudio Giorgi, Maurizio Grasselli. Foreword. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): i-i. doi: 10.3934/dcdsb.2014.19.7i.
Viscoelastic fluids: Free energies, differential problems and asymptotic behaviour
Giovambattista Amendola, Sandra Carillo, John Murrough Golden and Adele Manes
Some expressions for the free energy in the case of incompressible viscoelastic fluids are given. These are derived from free energies already introduced for other viscoelastic materials, adapted to incompressible fluids. A new free energy is given in terms of the minimal state descriptor. The internal dissipations related to these different functionals are also derived. Two equivalent expressions for the minimum free energy are given, one in terms of the history of strain and the other in terms of the minimal state variable. This latter quantity is also used to prove a theorem of existence and uniqueness of solutions to initial boundary value problems for incompressible fluids. Finally, the evolution of the system is described in terms of a strongly continuous semigroup of linear contraction operators on a suitable Hilbert space. Thus, a theorem of existence and uniqueness of solutions admitted by such an evolution problem is proved.
Giovambattista Amendola, Sandra Carillo, John Murrough Golden, Adele Manes. Viscoelastic fluids: Free energies, differential problems and asymptotic behaviour. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1815-1835. doi: 10.3934/dcdsb.2014.19.1815.
Effect of intracellular diffusion on current--voltage curves in potassium channels
Daniele Andreucci, Dario Bellaveglia, Emilio N.M. Cirillo and Silvia Marconi
We study the effect of intracellular ion diffusion on ionic currents permeating through the cell membrane. Ion flux across the cell membrane is mediated by specific channels, which have been widely studied in recent years with remarkable results: very precise measurements of the true current across a single channel are now available. Nevertheless, a complete understanding of this phenomenon is still lacking, though molecular dynamics and kinetic models have provided partial insights. In this paper we demonstrate, by analyzing the KcsA current-voltage curves via a suitable lattice model, that intracellular diffusion plays a crucial role in the permeation phenomenon. We believe that the interplay between the channel behavior and the ion diffusion in the cell is a key ingredient for a full explanation of the current-voltage curves.
Daniele Andreucci, Dario Bellaveglia, Emilio N.M. Cirillo, Silvia Marconi. Effect of intracellular diffusion on current--voltage curves in potassium channels. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1837-1853. doi: 10.3934/dcdsb.2014.19.1837.
Mixed norms, functional Inequalities, and Hamilton-Jacobi equations
Antonio Avantaggiati, Paola Loreti and Cristina Pocci
In this paper we generalize the notion of hypercontractivity for nonlinear semigroups allowing the functions to belong to mixed spaces. As an application of this notion, we consider a class of Hamilton-Jacobi equations and we establish functional inequalities. More precisely, we get hypercontractivity for viscosity solutions given in terms of Hopf-Lax type formulas. In this framework, we consider different measures associated with the variables; consequently, using mixed norms, we find new inequalities. The novelty of this approach is the study of functional inequalities with mixed norms for semigroups.
Antonio Avantaggiati, Paola Loreti, Cristina Pocci. Mixed norms, functional Inequalities, and Hamilton-Jacobi equations. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1855-1867. doi: 10.3934/dcdsb.2014.19.1855.
On the multiscale modeling of vehicular traffic: From kinetic to hydrodynamics
Nicola Bellomo, Abdelghani Bellouquid, Juanjo Nieto and Juan Soler
This paper deals with the multiscale modeling of vehicular traffic according to a kinetic theory approach, where the microscopic state of vehicles is described by position, velocity and activity, namely a variable suitable to model the quality of the driver-vehicle micro-system. Interactions at the microscopic scale are modeled by methods of game theory, thus leading to the derivation of mathematical models within the framework of the kinetic theory. Macroscopic equations are derived by asymptotic limits from the underlying description at the lower scale. This approach shows the hypothesis under which macroscopic models known in the literature can be derived and how new models can be developed.
Nicola Bellomo, Abdelghani Bellouquid, Juanjo Nieto, Juan Soler. On the multiscale modeling of vehicular traffic: From kinetic to hydrodynamics. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1869-1888. doi: 10.3934/dcdsb.2014.19.1869.
Mathematical modeling of phase transition and separation in fluids: A unified approach
Alessia Berti, Claudio Giorgi and Angelo Morro
A unified phase-field continuum theory is developed for transition and separation phenomena. A nonlocal formulation of the second law which involves an extra-entropy flux gives the basis of the thermodynamic approach. The phase-field is regarded as an additional variable related to some phase concentration, and its evolution is ruled by a balance equation, where flux and source terms are (unknown) constitutive functions. This evolution equation reduces to an equation of the rate-type when the flux is negligible, and it takes the form of a diffusion equation when the source term is disregarded. On this background, a general model for first-order transition and separation processes in a compressible fluid or fluid mixture is developed. Upon some simplifications, we apply it to the liquid-vapor phase change induced either by temperature or by pressure and we derive the expression of the vapor pressure curve. Taking into account the flux term, the sign of the diffusivity is discussed.
Alessia Berti, Claudio Giorgi, Angelo Morro. Mathematical modeling of phase transition and separation in fluids: A unified approach. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1889-1909. doi: 10.3934/dcdsb.2014.19.1889.
Discontinuity waves as tipping points: Applications to biological & sociological systems
John Bissell and Brian Straughan
The `tipping point' phenomenon is discussed as a mathematical object, and related to the behaviour of non-linear discontinuity waves in the dynamics of topical sociological and biological problems. The theory of such waves is applied to two illustrative systems in particular: a crowd-continuum model of pedestrian (or traffic) flow; and an hyperbolic reaction-diffusion model for the spread of the hantavirus infection (a disease carried by rodents). In the former, we analyse propagating acceleration waves, demonstrating how blow-up of the wave amplitude might indicate formation of a `human-shock', that is, a `tipping point' transition between safe pedestrian flow, and a state of overcrowding. While in the latter, we examine how travelling waves (of both acceleration and shock type) can be used to describe the advance of a hantavirus infection-front. Results from our investigation of crowd models also apply to equivalent descriptions of traffic flow, a context in which acceleration wave blow-up can be interpreted as emergence of the `phantom congestion' phenomenon, and `stop-start' traffic motion obeys a form of wave propagation.
John Bissell, Brian Straughan. Discontinuity waves as tipping points: Applications to biological & sociological systems. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1911-1934. doi: 10.3934/dcdsb.2014.19.1911.
Singular limit of an integrodifferential system related to the entropy balance
Elena Bonetti, Pierluigi Colli and Gianni Gilardi
A thermodynamic model describing phase transitions with thermal memory, in terms of an entropy equation and a momentum balance for the microforces, is addressed. Convergence results and error estimates are proved for the related integrodifferential system of PDE as the sequence of memory kernels converges to a multiple of a Dirac delta, in a suitable sense.
Elena Bonetti, Pierluigi Colli, Gianni Gilardi. Singular limit of an integrodifferential system related to the entropy balance. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1935-1953. doi: 10.3934/dcdsb.2014.19.1935.
An existence criterion for the $\mathcal{PT}$-symmetric phase transition
Emanuela Caliceti and Sandro Graffi
We consider on $L^2(\mathbb{R})$ the Schrödinger operator family $H(g)$ with domain and action defined as follows $$ D(H(g))=H^2(\mathbb{R})\cap L^2_{2M}(\mathbb{R}); \quad H(g) u=\bigg(-\frac{d^2}{dx^2}+\frac{x^{2M}}{2M}-g\,\frac{x^{M-1}}{M-1}\bigg)u $$ where $g\in\mathbb{C}$, $M=2,4,\ldots\;$. $H(g)$ is self-adjoint if $g\in\mathbb{R}$, while $H(ig)$ is $\mathcal{PT}$-symmetric. We prove that $H(ig)$ exhibits the so-called $\mathcal{PT}$-symmetric phase transition. Namely, for each eigenvalue $E_n(ig)$ of $H(ig)$, $g\in\mathbb{R}$, there exist $R_1(n)>R(n)>0$ such that $E_n(ig)\in\mathbb{R}$ for $|g| < R(n)$ and turns into a pair of complex conjugate eigenvalues for $|g| > R_1(n)$.
Emanuela Caliceti, Sandro Graffi. An existence criterion for the $\mathcal{PT}$-symmetric phase transition. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1955-1967. doi: 10.3934/dcdsb.2014.19.1955.
Uniform weighted estimates on pre-fractal domains
Raffaela Capitanelli and Maria Agostina Vivaldi
We establish uniform estimates in weighted Sobolev spaces for the solutions of the Dirichlet problems on snowflake pre-fractal domains.
Raffaela Capitanelli, Maria Agostina Vivaldi. Uniform weighted estimates on pre-fractal domains. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1969-1985. doi: 10.3934/dcdsb.2014.19.1969.
Intrinsic decay rate estimates for the wave equation with competing viscoelastic and frictional dissipative effects
Marcelo M. Cavalcanti, Valéria N. Domingos Cavalcanti, Irena Lasiecka and Flávio A. Falcão Nascimento
The wave equation defined on a compact Riemannian manifold $(M, \mathfrak{g})$ subject to a combination of locally distributed viscoelastic and frictional dissipations is discussed. The viscoelastic dissipation is active on the support of $a(x)$ while the frictional damping affects the portion of the manifold quantified by the support of $b(x)$, where both $a(x)$ and $b(x)$ are smooth functions. Assuming that $a(x) + b(x) \geq \delta >0 $ for all $x\in M$ and that the relaxation function satisfies a certain nonlinear differential inequality, it is shown that the solutions decay according to the law dictated by the decay rates corresponding to the slowest damping. In the special case when the viscoelastic effect is active on the entire domain and the frictional dissipation is differentiable at the origin, the overall decay rates are dictated by the viscoelasticity. The obtained decay estimates are intrinsic, without any prior quantification of decay rates of either the viscoelastic or frictional dissipative effects. This particular topic has been motivated by the influential paper of Fabrizio-Polidoro [15], where it was shown that viscoelasticity with a poorly behaving relaxation kernel destroys exponential decay rates generated by linear frictional dissipation. In this paper we extend these considerations to: (i) nonlinear dissipation with unquantified growth at the origin (frictional) and at infinity (viscoelastic), (ii) more general geometric settings that accommodate the competing nature of frictional and viscoelastic damping.
Marcelo M. Cavalcanti, Valéria N. Domingos Cavalcanti, Irena Lasiecka, Flávio A. Falcão Nascimento. Intrinsic decay rate estimates for the wave equation with competing viscoelastic and frictional dissipative effects. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 1987-2011. doi: 10.3934/dcdsb.2014.19.1987.
On a generalized Cahn-Hilliard equation with biological applications
Laurence Cherfils, Alain Miranville and Sergey Zelik
In this paper, we are interested in the study of the asymptotic behavior of a generalization of the Cahn-Hilliard equation with a proliferation term and endowed with Neumann boundary conditions. Such a model has, in particular, applications in biology. We show that either the average of the local density of cells is bounded, in which case we have a global in time solution, or the solution blows up in finite time. We further prove that the relevant, from a biological point of view, solutions converge to $1$ as time goes to infinity. We finally give some numerical simulations which confirm the theoretical results.
Laurence Cherfils, Alain Miranville, Sergey Zelik. On a generalized Cahn-Hilliard equation with biological applications. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2013-2026. doi: 10.3934/dcdsb.2014.19.2013.
Spatial behavior in the vibrating thermoviscoelastic porous materials
Stan Chiriţă
In this paper we study the spatial behavior of the amplitude of the steady-state vibrations in a thermoviscoelastic porous beam. Here we take into account the effects of the viscoelastic and thermal dissipation energies upon the corresponding harmonic vibrations in a right cylinder made of a thermoviscoelastic porous isotropic material. In fact, we prove that the positiveness of the viscoelastic and thermal dissipation energies are sufficient for characterizing the spatial decay and growth properties of the harmonic vibrations in a cylinder.
Stan Chiriţă. Spatial behavior in the vibrating thermoviscoelastic porous materials. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2027-2038. doi: 10.3934/dcdsb.2014.19.2027.
Asymptotic effects of boundary perturbations in excitable systems
Monica De Angelis and Pasquale Renno
A Neumann problem in the strip for the FitzHugh-Nagumo system is considered. The transformation into a nonlinear integral equation permits us to deduce a priori estimates for the solution. A complete asymptotic analysis shows that for large $ t $ the effects of the initial data vanish, while the effects of the boundary disturbances $ \varphi_1 (t), $ $ \varphi_2(t) $ depend on the properties of the data. When $ \varphi_1,\,\, \varphi_2 $ are convergent for large $ t $, the solution is everywhere bounded and depends on the asymptotic values of $ \varphi_1 , $ $ \varphi_2 $. Moreover, when $ \varphi_i \in L^1 (0,\infty)$ $(i=1,2)$ too, these effects vanish.
Monica De Angelis, Pasquale Renno. Asymptotic effects of boundary perturbations in excitable systems. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2039-2045. doi: 10.3934/dcdsb.2014.19.2039.
Singular parabolic problems with possibly changing sign data
Ida De Bonis and Daniela Giachetti
We show the existence of bounded solutions $u\in L^2(0,T;H^1_0(\Omega))$ for a class of parabolic equations having a lower order term $b(x,t,u,\nabla u)$ growing quadratically in the $\nabla u$-variable and singular in the $u$-variable on the set $\{u=0\}$.
We refer to the model problem $$\left\{ \begin{array}{ll} u_t - \Delta u = b(x,t) \frac{|\nabla u|^2}{|u|^k} + f(x,t) & \text{in } \Omega \times (0,T)\\ u(x,t) = 0 & \text{on } \partial\Omega\times(0,T)\\ u(x,0) = u_0 (x) & \text{in } \Omega \end{array}\right. $$ where $\Omega$ is a bounded open subset of $\mathbb{R}^N$, $N \geq 2$, $0 < T < + \infty$ and $0 < k < 1$. The data $f(x,t), u_0(x)$ can change their sign, so that the possible solution $u$ can vanish inside $Q_T=\Omega\times(0,T)$ even in a set of positive measure. Therefore, we have to carefully define the meaning of solution. Also $b(x,t)$ can have a quite general sign.
Ida De Bonis, Daniela Giachetti. Singular parabolic problems with possibly changing sign data. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2047-2064. doi: 10.3934/dcdsb.2014.19.2047.
The state of fractional hereditary materials (FHM)
Luca Deseri, Massiliano Zingales and Pietro Pollaci
The widespread interest on the hereditary behavior of biological and bioinspired materials motivates deeper studies on their macroscopic ``minimal" state. The resulting integral equations for the detected relaxation and creep power-laws, of exponent $\beta$, are characterized by fractional operators. Here strains in $SBV_{loc}$ are considered to account for time-like jumps. Consistently, starting from stresses in $L_{loc}^{r}$, $r\in [1,\beta^{-1}], \, \, \beta\in(0,1)$ we reconstruct the corresponding strain by extending a result in [42]. The ``minimal" state is explored by showing that different histories delivering the same response are such that the fractional derivative of their difference is zero for all times. This equation is solved through a one-parameter family of strains whose related stresses converge to the response characterizing the original problem. This provides an approximation formula for the state variable, namely the residual stress associated to the difference of the histories above. Very little is known about the microstructural origins of the detected power-laws. Recent rheological models, based on a top-plate adhering and moving on functionally graded microstructures, allow for showing that the resultant of the underlying ``microstresses" matches the action recorded at the top-plate of such models, yielding a relationship between the macroscopic state and the ``microstresses".
Luca Deseri, Massiliano Zingales, Pietro Pollaci. The state of fractional hereditary materials (FHM). Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2065-2089. doi: 10.3934/dcdsb.2014.19.2065.
Fatigue accumulation in a thermo-visco-elastoplastic plate
Michela Eleuteri, Jana Kopfová and Pavel Krejčí
We consider a thermodynamic model for fatigue accumulation in an oscillating elastoplastic Kirchhoff plate based on the hypothesis that the fatigue accumulation rate is proportional to the plastic part of the dissipation rate. For the full model with periodic boundary conditions we prove existence of a solution in the whole time interval.
Michela Eleuteri, Jana Kopfová, Pavel Krejčí. Fatigue accumulation in a thermo-visco-elastoplastic plate. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2091-2109. doi: 10.3934/dcdsb.2014.19.2091.
Uniqueness and stability results for non-linear Johnson-Segalman viscoelasticity and related models
Franca Franchi, Barbara Lazzari and Roberta Nibbi
In this paper we have proved exponential asymptotic stability for the corotational incompressible diffusive Johnson-Segalman viscoelastic model and a simple decay result for the corotational incompressible hyperbolic Maxwell model. Moreover, we have established continuous dependence and uniqueness results for the non-zero equilibrium solution.
In the compressible case, we have proved a Hölder continuous dependence theorem upon the initial data and body force for both models, whence follows a result of continuous dependence on the initial data and, therefore, uniqueness.
For the Johnson-Segalman model we have also dealt with the case of negative elastic viscosities, corresponding to retardation effects. A comparison with other type of viscoelasticity, showing short memory elastic effects, is given.
Franca Franchi, Barbara Lazzari, Roberta Nibbi. Uniqueness and stability results for non-linear Johnson-Segalman viscoelasticity and related models. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2111-2132. doi: 10.3934/dcdsb.2014.19.2111.
On the Green-Naghdi Type III heat conduction model
Claudio Giorgi, Diego Grandi and Vittorino Pata
In this work, we compare different constitutive models of heat flux in a rigid heat conductor. In particular, we investigate the relation between the solutions of the Green-Naghdi type III equation and those of the classical Fourier heat equation. The latter is often referred to as a limit case of the former, as (formally) obtained by letting a certain small positive parameter $\epsilon$ vanish. In the presence of steady heat sources, we prove that the type III equation may be considered as a perturbation of the Fourier one only if the solutions are compared on a finite time interval of order $1/\epsilon$, whereas significant differences occur in the long term. Moreover, for a bar with finite length and prescribed heat flux at its ends, the solutions to the type III equation do not converge asymptotically in time to the steady solutions to the corresponding Fourier model. This suggests that the Green-Naghdi type III theory is not to be viewed as encompassing the Fourier theory, at least when either asymptotic or stationary phenomena are involved.
Claudio Giorgi, Diego Grandi, Vittorino Pata. On the Green-Naghdi Type III heat conduction model. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2133-2143. doi: 10.3934/dcdsb.2014.19.2133.
Nonlinear free fall of one-dimensional rigid bodies in hyperviscous fluids
Giulio G. Giusteri, Alfredo Marzocchi and Alessandro Musesti
We consider the free fall of slender rigid bodies in a viscous incompressible fluid. We show that the dimensional reduction (DR), performed by substituting the slender bodies with one-dimensional rigid objects, together with a hyperviscous regularization (HR) of the Navier--Stokes equation for the three-dimensional fluid, leads to a well-posed fluid-structure interaction problem. In contrast to what can be achieved within a classical framework, the hyperviscous term permits a sound definition of the viscous force acting on the one-dimensional immersed body. These results show that the DR/HR procedure can be effectively employed for the mathematical modeling of the free fall problem in the slender-body limit.
Giulio G. Giusteri, Alfredo Marzocchi, Alessandro Musesti. Nonlinear free fall of one-dimensional rigid bodies in hyperviscous fluids. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2145-2157. doi: 10.3934/dcdsb.2014.19.2145.
Inverse problems for singular differential-operator equations with higher order polar singularities
Mohammed Al Horani and Angelo Favini
In this paper we study an inverse problem for strongly degenerate differential equations in Banach spaces. A projection method on suitable subspaces will be used to solve the given problem. A number of concrete applications to ordinary and partial differential equations are described.
Mohammed Al Horani, Angelo Favini. Inverse problems for singular differential-operator equations with higher order polar singularities. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2159-2168. doi: 10.3934/dcdsb.2014.19.2159.
Strain gradient theory of porous solids with initial stresses and initial heat flux
Dorin Ieşan
In this paper we present a strain gradient theory of thermoelastic porous solids with initial stresses and initial heat flux. First, we establish the equations governing the infinitesimal deformations superposed on large deformations. Then, we derive a linear theory of prestressed porous bodies with initial heat flux. The theory is capable of describing the deformation of chiral materials. A reciprocity relation and a uniqueness result with no definiteness assumption on the elastic constitutive coefficients are presented.
Dorin Ieşan. Strain gradient theory of porous solids with initial stresses and initial heat flux. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2169-2187. doi: 10.3934/dcdsb.2014.19.2169.
Second-sound phenomena in inviscid, thermally relaxing gases
Pedro M. Jordan
We consider the propagation of acoustic and thermal waves in a class of inviscid, thermally relaxing gases wherein the flow of heat is described by the Maxwell--Cattaneo law, i.e., in Cattaneo--Christov gases. After first considering the start-up piston problem under the linear theory, we then investigate traveling wave phenomena under the weakly-nonlinear approximation. In particular, a shock analysis is carried out, comparisons with predictions from classical gas dynamics theory are performed, and critical values of the parameters are derived. Special case results are also presented and connections to other fields are noted.
Pedro M. Jordan. Second-sound phenomena in inviscid, thermally relaxing gases. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2189-2205. doi: 10.3934/dcdsb.2014.19.2189.
Going to new lengths: Studying the Navier--Stokes-$\alpha\beta$ equations using the strained spiral vortex model
Tae-Yeon Kim, Xuemei Chen, John E. Dolbow and Eliot Fried
We study the effect of the length scales $\alpha$ and $\beta$ on the performance of the Navier--Stokes-$\alpha\beta$ equations for numerical simulations of turbulence over coarse discretizations. To this end, we rely on the strained spiral vortex model and take advantage of the dimensional reduction allowed by that model. In particular, the three-dimensional energy spectrum is reformulated so that it can be calculated from solutions of the two-dimensional unstrained Navier--Stokes-$\alpha\beta$ equations. A similarity theory for the spiral vortex model shows that the Navier--Stokes-$\alpha\beta$ model is better equipped than the Navier--Stokes-$\alpha$ model to capture smaller-scale behavior. Numerical experiments performed using a pseudo-spectral discretization along with the second-order Adams--Bashforth time-stepping algorithm yield results indicating that the fidelity of the energy spectrum in both the inertial and dissipation ranges is significantly improved for $\beta<\alpha$.
Tae-Yeon Kim, Xuemei Chen, John E. Dolbow, Eliot Fried. Going to new lengths: Studying the Navier--Stokes-$\alpha\beta$ equations using the strained spiral vortex model. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2207-2225. doi: 10.3934/dcdsb.2014.19.2207.
Analysis and simulation for an isotropic phase-field model describing grain growth
Maciek D. Korzec and Hao Wu
A phase-field system of coupled Allen--Cahn type PDEs describing grain growth is analyzed and simulated. In the periodic setting, we prove the existence and uniqueness of global weak solutions to the problem. Then we investigate the long-time behavior of the solutions within the theory of infinite-dimensional dissipative dynamical systems. Namely, the problem possesses a global attractor as well as an exponential attractor, which entails that the global attractor has finite fractal dimension. Moreover, we show that each trajectory converges to a single equilibrium. A time-adaptive numerical scheme based on trigonometric interpolation is presented. It allows one to track the approximated long-time behavior accurately and leads to a convergence rate. The scheme exhibits a physically consistent discrete free energy dissipation.
Maciek D. Korzec, Hao Wu. Analysis and simulation for an isotropic phase-field model describing grain growth. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2227-2246. doi: 10.3934/dcdsb.2014.19.2227.
Identification problems related to cylindrical dielectrics in presence of polarization
Alfredo Lorenzi
We consider the problem of recovering a polarization kernel in an axially inhomogeneous cylindrical dielectric, the polarization depending on time and the axial variable, but being constant on each cross section of the cylinder.
For this problem, under some additional measurement, we prove an existence and uniqueness result.
Alfredo Lorenzi. Identification problems related to cylindrical dielectrics in presence of polarization. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2247-2265. doi: 10.3934/dcdsb.2014.19.2247.
On some properties of the Mittag-Leffler function $\mathbf{E_\alpha(-t^\alpha)}$, completely monotone for $\mathbf{t> 0}$ with $\mathbf{0<\alpha<1}$
Francesco Mainardi
We analyse some peculiar properties of the function of the Mittag-Leffler (M-L) type, $e_\alpha(t) := E_\alpha(-t^\alpha)$ for $0<\alpha<1$ and $t>0$, which is known to be completely monotone (CM) with a non-negative spectrum of frequencies and times, suitable to model fractional relaxation processes. We first note that (surprisingly) these two spectra coincide, providing a universal scaling property of this function not well pointed out in the literature. Furthermore, we consider the problem of approximating our M-L function with simpler CM functions for small and large times. We provide two different sets of elementary CM functions that are asymptotically equivalent to $e_\alpha(t)$ as $t\to 0$ and $t\to +\infty$. The first set is given by the stretched exponential for small times and the power law for large times, following a standard approach. For the second set we chose two rational CM functions in $t^\alpha$, obtained as the Padé approximants (PA) $[0/1]$ to the convergent series in positive powers (as $t\to 0$) and to the asymptotic series in negative powers (as $t\to \infty$), respectively. Numerical computations lead us to the conjecture that the second set provides upper and lower bounds to the Mittag-Leffler function.
Francesco Mainardi. On some properties of the Mittag-Leffler function $\mathbf{E_\alpha(-t^\alpha)}$, completely monotone for $\mathbf{t> 0}$ with $\mathbf{0<\alpha<1}$. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2267-2278. doi: 10.3934/dcdsb.2014.19.2267.
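The small-time approximation described in this abstract can be illustrated numerically. The sketch below (my own construction, not the paper's code) compares a truncated power series for $e_\alpha(t) = E_\alpha(-t^\alpha)$ with the stretched-exponential approximation $\exp(-t^\alpha/\Gamma(1+\alpha))$; the function names and truncation length are my choices.

```python
from math import exp, gamma

def e_alpha(t, alpha, terms=60):
    """Truncated power series for e_alpha(t) = E_alpha(-t^alpha).

    Accurate for small-to-moderate t; the alternating series loses
    numerical accuracy when t is large, where the power-law
    asymptotics t^(-alpha)/Gamma(1-alpha) takes over instead."""
    x = t ** alpha
    return sum((-x) ** k / gamma(alpha * k + 1) for k in range(terms))

def stretched_exp(t, alpha):
    """Small-time approximation exp(-t^alpha / Gamma(1 + alpha))."""
    return exp(-t ** alpha / gamma(1 + alpha))

alpha, t = 0.5, 0.01
print(e_alpha(t, alpha), stretched_exp(t, alpha))
```

For these sample values the two quantities agree to roughly two decimal places, consistent with the asymptotic equivalence as $t \to 0$ claimed in the abstract.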
Onset of convection in rotating porous layers via a new approach
Salvatore Rionero
Via a new approach, ternary fluid mixtures saturating rotating horizontal porous layers, heated from below and salted from above and below, are investigated. With or without the presence of Brinkman viscosity, the absence of subcritical instabilities is shown together with the coincidence of linear and non-linear global stability of the thermal conduction solution. The stability-instability conditions are found to be given by simple algebraic conditions in closed forms.
Salvatore Rionero. Onset of convection in rotating porous layers via a new approach. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2279-2296. doi: 10.3934/dcdsb.2014.19.2279.
q-Gaussian integrable Hamiltonian reductions in anisentropic gasdynamics
Colin Rogers and Tommaso Ruggeri
Integrable reductions in non-isothermal spatial gasdynamics are isolated corresponding to q-Gaussian density distributions. The availability of a Tsallis parameter q in the reductions permits the construction via a Madelung transformation of wave packet solutions of a class of associated q-logarithmic nonlinear Schrödinger equations involving a de Broglie-Bohm quantum potential term.
Colin Rogers, Tommaso Ruggeri. q-Gaussian integrable Hamiltonian reductions in anisentropic gasdynamics. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2297-2312. doi: 10.3934/dcdsb.2014.19.2297.
Thermomechanics of hydrogen storage in metallic hydrides: Modeling and analysis
Tomáš Roubíček and Giuseppe Tomassetti
A thermodynamically consistent mathematical model for hydrogen adsorption in metal hydrides is proposed. Beside hydrogen diffusion, the model accounts for phase transformation accompanied by hysteresis, swelling, temperature and heat transfer, strain, and stress. We prove existence of solutions of the ensuing system of partial differential equations by a carefully-designed, semi-implicit approximation scheme. A generalization for a drift-diffusion of multi-component ionized ``gas'' is outlined, too.
Tomáš Roubíček, Giuseppe Tomassetti. Thermomechanics of hydrogen storage in metallic hydrides: Modeling and analysis. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2313-2333. doi: 10.3934/dcdsb.2014.19.2313.
On the theory of viscoelasticity for materials with double porosity
Merab Svanadze
In this paper the linear theory of viscoelasticity for Kelvin-Voigt materials with double porosity is presented and the basic partial differential equations are derived. The system of these equations is based on the equations of motion, conservation of fluid mass, the effective stress concept and Darcy's law for materials with double porosity. This theory is a straightforward generalization of the earlier proposed dynamical theory of elasticity for materials with double porosity. The fundamental solution of the system of equations of steady vibrations is constructed by elementary functions and its basic properties are established. Finally, the properties of plane harmonic waves are studied. The results obtained from this study can be summarized as follows: through a Kelvin-Voigt material with double porosity three longitudinal and two transverse plane harmonic attenuated waves propagate.
Merab Svanadze. On the theory of viscoelasticity for materials with double porosity. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2335-2352. doi: 10.3934/dcdsb.2014.19.2335.
Mathematical study of the small oscillations of a floating body in a bounded tank containing an incompressible viscous liquid
Doretta Vivona and Pierre Capodanno
The authors study the small oscillations of a floating body in a bounded tank containing an incompressible viscous fluid.
Using the variational formulation of the problem, they obtain an operator equation from which they can study the spectrum of the problem.
The small motions are strongly and weakly damped aperiodic motions and, if the viscosity is sufficiently small, there are also at most a finite number of damped oscillatory motions.
The authors also give an existence and uniqueness theorem for the solution of the associated evolution problem.
Doretta Vivona, Pierre Capodanno. Mathematical study of the small oscillations of a floating body in a bounded tank containing an incompressible viscous liquid. Discrete & Continuous Dynamical Systems - B, 2014, 19(7): 2353-2364. doi: 10.3934/dcdsb.2014.19.2353.
Proofs of common number properties
I will name some properties I will use for future reference with MP (my proposition) below:
(MP1.1) 0 times anything is 0
Let $a$ be any number, and start with $a \cdot 0$. By the property of $0$, we have $a \cdot 0 = a \cdot (0 + 0)$. Using the distributive property, this equals $a \cdot 0 + a \cdot 0$. Combining it all together, we have $a\cdot 0 = a\cdot 0 + a\cdot 0$. Now add the additive inverse $-(a \cdot 0)$ to both sides: the left-hand side becomes $0$, while the right-hand side becomes $a \cdot 0$. So $a \cdot 0 = 0$.
(MP1.2) $(-a)b = – (ab)$
Note the meaning of $-a$ is the additive inverse of $a$, meaning that $a + (-a) = 0$ while the meaning of $-(ab)$ is the additive inverse of $ab$. Let's add $ab$ with $(-a)b$.
$ab + (-a)b = (a + (-a)) \cdot b$ by the commutative and distributive laws. Since $-a$ is the additive inverse of $a$, this gives us $0 \cdot b$ which is $0$ from MP1.1. Thus $(-a)b$ is the additive inverse of $ab$ which is what we want to prove.
(MP1.3) $(-a)(-b) = ab$
$ (-a)(-b) + (-(ab)) = (-a)(-b) + (-a)b$ by MP1.2. By the distributive property, $(-a)(-b) + (-a)b = (-a)(-b + b) = (-a)\cdot 0 = 0$ by MP1.1. So $(-a)(-b)$ is the additive inverse of $-(ab)$; since the additive inverse of $-(ab)$ is $ab$, we conclude $(-a)(-b) = ab$.
We cannot have the multiplicative inverse of 0
Suppose there exists a multiplicative inverse of $0$, which we will denote by $0^{-1}$. Then $0 \cdot 0^{-1} = 1$. But by MP1.1, $0$ times anything is $0$, so $0=1$. This brings a whole host of problems: $1 + 1 = 0 + 0 = 0$. So no matter what we do, every number we try to create collapses to $0$. Hence allowing a multiplicative inverse of $0$ means that we can only work with one number: not a very interesting proposition. Conversely, to have more than one number to work with, we cannot allow a multiplicative inverse of $0$.
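As a concrete illustration (my own, not part of the text): Python's exact rational arithmetic in the `fractions` module enforces exactly this restriction, refusing to construct anything with denominator $0$.

```python
from fractions import Fraction

# Every nonzero rational has a multiplicative inverse:
assert Fraction(2, 3) * Fraction(3, 2) == 1

# but the rationals refuse to construct an inverse of 0:
# Fraction(1, 0) raises immediately rather than produce one.
try:
    Fraction(1, 0)
    print("unexpectedly constructed an inverse of 0")
except ZeroDivisionError as err:
    print("no inverse of 0:", err)
```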
(MP1.4) Corollary of $(-a)b = -(ab)$: $(-1)b = -b$
(MP1.5) If $ab=0$, then $a=0$ or $b=0$.
Case 1: $a=0$. Then the conclusion already holds.
Case 2: $a \neq 0$. Then the multiplicative inverse $a^{-1}$ exists, so $ab= 0$ implies $a^{-1} ab = a^{-1} \cdot 0$, and hence $b=0$.
(MP1.6) $(ab)^{-1} = a^{-1}b^{-1}$
$ab (a^{-1}b^{-1}) = a a^{-1} b b^{-1}$ by the commutative and associative properties which gets us $1 \cdot 1 = 1$ by the property of the multiplicative identity $1$. Hence $a^{-1}b^{-1}$ is the multiplicative inverse of $ab$.
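None of this replaces the proofs above, but the propositions can be spot-checked on the rationals, which satisfy all the axioms in play. A minimal sketch using Python's exact `fractions` arithmetic (the sample values are my own):

```python
from fractions import Fraction as F

a, b = F(3, 7), F(-5, 2)

assert a * 0 == 0                          # MP1.1: 0 times anything is 0
assert (-a) * b == -(a * b)                # MP1.2: (-a)b = -(ab)
assert (-a) * (-b) == a * b                # MP1.3: (-a)(-b) = ab
assert F(-1) * b == -b                     # MP1.4: (-1)b = -b
assert (a * b) ** -1 == a ** -1 * b ** -1  # MP1.6: (ab)^{-1} = a^{-1}b^{-1}
# MP1.5 is an implication (ab = 0 forces a = 0 or b = 0),
# so it is not checked pointwise here.
print("MP1.1-MP1.4 and MP1.6 hold for the sample values")
```

Checks like these only confirm particular instances, of course; the point of the proofs is that the identities follow from the axioms alone.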
Chapter 1: Numbers, operations and axioms
Numbers are our first exposure to mathematics: yet what exactly a "number" is is an interesting discussion I've had in philosophy classes. At least for "whole numbers", we typically find them pretty intuitive, and it doesn't take a child much to be comfortable with numbers like "2" and "3" and to learn how to operate on them via addition, "$2+3=5$", and multiplication, "$2 \cdot 3 = 6$". The fact that there is a "two-ness" behind 2 cows, 2 dollars and 2 books is pretty remarkable if we slow down and think about it. But that is a discussion for another day. Let us now look at a few common sets of numbers we typically encounter:
The natural numbers $\mathbb{N} = \{0, 1, 2, 3, \ldots \}$
The integers $\mathbb{Z} = \{ \ldots, -2, -1, 0, 1, 2, \ldots\}$
The rational numbers $\mathbb{Q}$, numbers that can be expressed as $\frac{a}{b}$ where $a,b\in\mathbb{Z}, b \neq 0$
The real numbers: visualized as a number line, including all the rational numbers and irrational numbers (e.g. $\sqrt{2}, \pi, e$).
The usefulness of these sets of numbers is encapsulated by the rules we want our operations, addition $+$ and multiplication $\cdot$, to follow, and by the axioms we want them to satisfy.
Operations and axioms
To model how we think about numbers, addition and multiplication and how they work "in real life", we want them to follow certain rules. First (closure), for any numbers $a$ and $b$, we want $a+b$ and $a \cdot b$ to be numbers too. The following should also come as no surprise:
Associativity: we want the grouping of repeated operations not to matter. $a + (b+c) = (a+b) + c, a \cdot (b \cdot c) = (a \cdot b) \cdot c$.
Commutativity: we want the order of the operands not to matter. $a + b = b + a, a \cdot b = b \cdot a$
Distributive law: we want to know how addition and multiplication interact. $a \cdot (b+c) = a \cdot b + a \cdot c$.
As an aside, commutativity of multiplication is often not required when we study more abstract systems, such as matrices in abstract algebra. For our usual numbers that isn't a concern. The numbers $0$ and $1$ play a special part in addition and multiplication respectively: they act as "identity" elements: $a + 0 = a$ and $a \cdot 1 = a$. The natural numbers alone already satisfy all of the above axioms. Along with mathematical induction (chapter 2, which is implicit in the way the natural numbers are defined), these bring about the rich field of number theory. But things get more interesting (and allow for our study of calculus) when we require "inverse" elements, essentially setting the stage for the opposite operations of subtraction and division.
Existence of additive inverse: for every number $a$, we have a number ($-a$) such that $a + (-a) = 0$.
Existence of multiplicative inverse: for every number $a$, $a\neq 0$ we have a number ($a^{-1}$) such that $a \cdot a^{-1} = 1$.
Additive inverses bring about the integers, and multiplicative inverses bring about the rational numbers. The fact that $0$ cannot have a multiplicative inverse will be investigated in a following post. Finally, we introduce the idea of "ordering" the numbers using the concept of an inequality. For any two numbers $a$ and $b$, one and only one of the following holds:
$a=b,$
$a < b,$ or
$b < a$.
We want the inequality to have the following rules:
If $ a < b$ and $b < c$, then $a < c$
If $a < b$, then for any $c$, $a+c < b+c$
If $a < b$ and $0 < c$, then $ac < bc$
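The three order rules above can likewise be spot-checked on sample rationals; a minimal sketch (the values are chosen arbitrarily by me):

```python
from fractions import Fraction as F

a, b, c = F(-1, 2), F(1, 3), F(5, 4)   # chosen so that a < b and 0 < c

assert a < b and b < c and a < c       # rule 1: transitivity
assert a + c < b + c                   # rule 2: adding c preserves order
assert a * c < b * c                   # rule 3: multiplying by positive c preserves order
print("order rules hold for the sample values")
```

Note that rule 3 genuinely needs $0 < c$: multiplying by a negative number reverses the inequality instead.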
These axioms alone (along with mathematical induction in Chapter 2) can lead us to a whole bunch of familiar techniques we have already internalized. We will explore them by solving some exercises in a subsequent post. We also note that the rational numbers alone are sufficient to satisfy all the above axioms. But very quickly we realize that this will preclude a solution to something "simple" like $x^2 = 2$. A proof that $\sqrt{2}$ is irrational is presented in many other places: I will refer readers to Google if they have not seen it before. $x^2 = 2$ comes up pretty naturally from the study of geometry (the length of the hypotenuse in an isosceles right-angled triangle with two sides of length 1), so real numbers are required for the study of subjects where we need a "measure". The construction of the reals is something I'm looking forward to at the end of the book.
Modulus and the triangle inequality
The modulus/absolute value function (defined by $|a| = a$ if $a$ is positive or 0, and $|a| = -a$ if $a$ is negative), along with the triangle inequality $|a + b| \leq |a| + |b|$ comes up all the time in subsequent work so I will just mention that at the end of this post. The proof can be done by simply working through all the cases where $a$ and $b$ take different signs.
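That case-by-case proof can be mirrored by an exhaustive numerical check over sign combinations; a small sketch of my own:

```python
def triangle_holds(a, b):
    """Check the triangle inequality |a + b| <= |a| + |b| for one pair."""
    return abs(a + b) <= abs(a) + abs(b)

# Sample magnitudes covering negative, zero and positive values,
# so every sign combination of (a, b) is exercised.
samples = [-3.5, -1, 0, 1, 2.25]
assert all(triangle_holds(a, b) for a in samples for b in samples)

# Equality occurs when a and b share a sign (or one is 0);
# the inequality is strict when the signs differ.
assert abs(2 + 3) == abs(2) + abs(3)
assert abs(-2 + 3) < abs(-2) + abs(3)
print("triangle inequality verified on all sample pairs")
```

As with the earlier checks, this only tests finitely many pairs; the sign-by-sign argument in the text is what covers all numbers at once.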
My introduction to Spivak: Calculus
"Analysis" is one of the major fields of mathematics and the path a typical student of mathematics goes through in this field goes something like this:
"Calculus": This is where we are more concerned about the techniques of differentiation and integration. In the American context this is typically done in AP classes + at lower level undergraduate courses. In Singapore we start this journey in Additional Mathematics at the "O" levels where differentiation (techniques and applications) are covered extensively with a brief introduction into integration techniques. At the "A" levels we delve deeper into further integration techniques and start touching on some interesting applications of calculus through areas and volumes, the Maclaurin series and differential equations. At university this is usually taken further through an introduction to limits and further courses on differential equations (both ordinary and partial). This is also where many math-adjacent courses (engineering and sciences) end in this endeavor.
"Analysis": The concept of proofs is where the (pure) mathematics syllabus start to deviate from their applied mathematics, science and engineering contemporaries. And this is where I feel we start moving from "Calculus" into "Analysis", where the proof of equations and theorems start to take more importance compared to the use and application of them. In a sense, we re-learn what we have started taken for granted in our earlier study and place them on solid foundation. Analysis I typically takes a student through limits and the epsilon-delta formulation, sequences and differentiation, Analysis II goes into the (Riemann) integral while Analysis III goes into the link between the two: The Fundamental Theorem of Calculus. With interesting detours along the way (like the Taylor series) in between.
"Measure Theory and the Lebesgue integral": At the upper undergraduate and beginning graduate level we move on from the Riemann integral to the Lebesgue integral, with a whole big idea of measure theory supporting the approach. This leads to all sorts of related topics such as topology, functional analysis, probability theory and more!
As I went through my own path (coming from an engineering background to graduate work in math), I was often in awe of this progression of ideas. Unfortunately, the pace of school work (having to complete each course within 10-12 weeks with a final examination at the end) meant I sometimes did not have the time to fully appreciate some of the ideas, and I hardly took any interesting-looking detours. Having needed a few courses around measure theory before I truly understood and appreciated it, I feel I could benefit from a slower but deeper dive into the more "basic" analysis portion of my study.
I've read many good reviews of Spivak's Calculus (the title being a bit of a misnomer: it is definitely a book aimed at the "analysis" part rather than the "calculus" part as I've described above) and am eager to give it a whirl. Chapters I'm especially looking forward to include the proofs that $\pi$ is irrational and that $e$ is transcendental, how he defines the logarithm and exponential functions, and the construction of the real numbers (plus a proof of its uniqueness). As I work through the book I'm looking to use this blog to aid my understanding of the material.
I'm using the third edition, though I understand the fourth edition is out. Let's enjoy this journey together then!
Copyright © 2023 Kelvin Soh. All rights reserved.
Molecules and Cells
Korean Society for Molecular and Cellular Biology (한국분자세포생물학회)
Life Science > Molecular Cell Biology
Molecules and Cells (Mol. Cells) is an international online open-access journal devoted to the advancement and dissemination of fundamental knowledge in molecular and cellular biology. Reports on a broad range of topics of general interest to molecular and cell biologists are published. The journal will not publish papers that simply report the cloning and sequencing of a gene or a preliminary X-ray crystallographic analysis without providing evidence of further biological significance. It is published monthly by the Korean Society for Molecular and Cellular Biology (KSMCB).
Dopamine Receptor Interacting Proteins (DRIPs) of Dopamine D1-like Receptors in the Central Nervous System
Wang, Min;Lee, Frank J.S.;Liu, Fang 149
Dopamine is a major neurotransmitter in the mammalian central nervous system (CNS) that regulates neuroendocrine functions, locomotor activity, cognition and emotion. The dopamine system has been extensively studied because dysfunction of this system is linked to various pathological conditions including Parkinson's disease, schizophrenia, Tourette's syndrome, and drug addiction. Accordingly, intense efforts to delineate the full complement of signaling pathways mediated by individual receptor subtypes have been pursued. Dopamine D1-like receptors are of particular interest because they are the most abundant dopamine receptors in the CNS. Recent work suggests that dopamine signaling could be regulated via dopamine receptor interacting proteins (DRIPs). Unraveling the DRIPs involved in the dopamine system may provide a better understanding of the mechanisms underlying CNS disorders related to dopamine system dysfunction and may help identify novel therapeutic targets.
Chloroplastic NAD(P)H Dehydrogenase Complex and Cyclic Electron Transport around Photosystem I
Endo, Tsuyoshi;Ishida, Satoshi;Ishikawa, Noriko;Sato, Fumihiko 158
Recent molecular genetics studies have revealed that cyclic electron transport around photosystem I is essential for normal photosynthesis and growth of plants. The chloroplastic NAD(P)H dehydrogenase (NDH) complex, a homologue of complex I in respiratory electron transport, is involved in one of two cyclic pathways. Recent studies on the function and structure of the NDH complex are reviewed.
Genetic Diversity among Korean Bermudagrass (Cynodon spp.) Ecotypes Characterized by Morphological, Cytological and Molecular Approaches
Kang, Si-Yong;Lee, Geung-Joo;Lim, Ki Byung;Lee, Hye Jung;Park, In Sook;Chung, Sung Jin;Kim, Jin-Baek;Kim, Dong Sub;Rhee, Hye Kyung 163
The genus Cynodon comprises ten species. The objective of this study was to evaluate the genetic diversity of Korean bermudagrasses at the morphological, cytological and molecular levels. Morphological parameters, the nuclear DNA content and ploidy levels were observed in 43 bermudagrass ecotypes. AFLP markers were evaluated to define the genetic diversity, and chromosome counts were made to confirm the inferred cytotypes. Nuclear DNA contents were in the ranges 1.42-1.56, 1.94-2.19, 2.54, and 2.77-2.85 pg/2C for the triploid, tetraploid, pentaploid, and hexaploid accessions, respectively. The inferred cytotypes were triploid (2n = 3x = 27), tetraploid (2n = 4x = 36), pentaploid (2n = 5x = 45), and hexaploid (2n = 6x = 54), but the majority of the collections were tetraploid (81%). Mitotic chromosome counts verified the corresponding ploidy levels. The fast growing fine-textured ecotypes had lower ploidy levels, while the pentaploids and hexaploids were coarse types. The genetic similarity ranged from 0.42 to 0.94 with an average of 0.64. UPGMA cluster analysis and principal coordinate analysis separated the ecotypes into 6 distinct groups. The genetic similarity suggests natural hybridization between the different cytotypes, which could be useful resources for future breeding and genetic studies.
Functional Equivalence of Translation Factor eIF5B from Candida albicans and Saccharomyces cerevisiae
Jun, Kyung Ok;Yang, Eun Ji;Lee, Byeong Jeong;Park, Jeong Ro;Lee, Joon H.;Choi, Sang Ki 172
Eukaryotic translation initiation factor 5B (eIF5B) plays a role in recognition of the AUG codon in conjunction with translation factor eIF2, and promotes joining of the 60S ribosomal subunit. To see whether the eIF5B proteins of other organisms function in Saccharomyces cerevisiae, we cloned the corresponding genes from Oryza sativa, Arabidopsis thaliana, Aspergillus nidulans and Candida albicans and expressed them under the control of the galactose-inducible GAL promoter in the $fun12{\Delta}$ strain of Saccharomyces cerevisiae. Expression of Candida albicans eIF5B complemented the slow-growth phenotype of the $fun12{\Delta}$ strain, but that of Aspergillus nidulans did not, despite the fact that its protein was expressed better than that of Candida albicans. The Arabidopsis thaliana protein was also not functional in Saccharomyces. These results reveal that the eIF5B of Candida albicans has a close functional relationship with that of Saccharomyces cerevisiae, as also shown by a phylogenetic analysis based on the amino acid sequences of the eIF5Bs.
Molecular Changes in Remote Tissues Induced by Electro-Acupuncture Stimulation at Acupoint ST36
Rho, Sam-Woong;Choi, Gi-Soon;Ko, Eun-Jung;Kim, Sun-Kwang;Lee, Young-Seop;Lee, Hye-Jung;Hong, Moo-Chang;Shin, Min-Kyu;Min, Byung-Il;Kee, Hyun-Jung;Lee, Cheol-Koo;Bae, Hyun-Su 178
To investigate the effects of electro-acupuncture (EA) treatment on regions remote from the application, we measured cellular, enzymatic, and transcriptional activities in various internal tissues of healthy rats. The EA was applied to the well-identified acupoint ST36 of the leg. After application, we measured the activity of natural killer cells in the spleen, gene expression in the hypothalamus, and the activities of antioxidative enzymes in the hypothalamus, liver and red blood cells. The EA treatment increased natural killer cell activity in the spleen by approximately 44%. It also induced genes related to pain, including 5-Hydroxytryptamine (serotonin) receptor 3a (Htr3a) and Endothelin receptor type B (Ednrb) in the hypothalamus, and increased the activity of superoxide dismutase in the hypothalamus, liver, and red blood cells. These findings indicate that EA mediates its effects through changes in cellular activity, gene expression, and enzymatic activity in multiple remote tissues. The sum of these alterations may explain the beneficial effects of EA.
C-FLIP Promotes the Motility of Cancer Cells by Activating FAK and ERK, and Increasing MMP-9 Expression
Park, Deokbum;Shim, Eunsook;Kim, Youngmi;Kim, Young Myeong;Lee, Hansoo;Choe, Jongseon;Kang, Dongmin;Lee, Yun-Sil;Jeoung, Dooil 184
We examined the role of c-FLIP in the motility of HeLa cells. A small interfering RNA (siRNA) directed against c-FLIP inhibited the adhesion and motility of the cells without affecting their growth rate. The long form of c-FLIP ($c-FLIP_L$), but not the short form ($c-FLIP_S$), enhanced adhesion and motility. Downregulation of $c-FLIP_L$ with siRNA decreased phosphorylation of FAK and ERK, while overexpression of $c-FLIP_L$ increased their phosphorylation. Overexpression of FAK activated ERK, and enhanced the motility of HeLa cells. FRNK, an inhibitory fragment of FAK, inhibited ERK and decreased motility. Inhibition of ERK also significantly suppressed $c-FLIP_L$-promoted motility. Inhibition of ROCK by Y27632 suppressed the $c-FLIP_L$-promoted motility by reducing phosphorylation of FAK and ERK. Overexpression of $c-FLIP_L$ increased the expression and secretion of MMP-9, and inhibition of MMP-9 by Ilomastat reduced $c-FLIP_L$- promoted cell motility. A caspase-like domain (amino acids 222-376) was found to be necessary for the $c-FLIP_L$-promoted cell motility. We conclude that $c-FLIP_L$ promotes the motility of HeLa cells by activating FAK and ERK, and increasing MMP-9 expression.
Marker Production by PCR Amplification with Primer Pairs from Conserved Sequences of WRKY Genes in Chili Pepper
Kim, Hyoun-Joung;Lee, Heung-Ryul;Han, Jung-Heon;Yeom, Seon-In;Harn, Chee-Hark;Kim, Byung-Dong 196
Despite increasing awareness of the importance of WRKY genes in plant defense signaling, the locations of these genes in the Capsicum genome have not been established. To develop WRKY-based markers, primer sequences were deduced from the conserved sequences of the DNA binding motif within the WRKY domains of tomato and pepper genes. These primers were derived from upstream and downstream parts of the conserved sequences of the three WRKY groups. Six primer combinations of each WRKY group were tested for polymorphisms between the mapping parents, C. annuum 'CM334' and C. annuum 'Chilsung-cho'. DNA fragments amplified by primer pairs deduced from WRKY Group II genes revealed high levels of polymorphism. Using 32 primer pairs to amplify upstream and downstream parts of the WRKY domain of WRKY group II genes, 60 polymorphic bands were detected. Polymorphisms were not detected with primer pairs from downstream parts of WRKY group II genes. Half of these primers were subjected to $F_2$ genotyping to construct a linkage map. Thirty of 41 markers were located evenly spaced on 20 of the 28 linkage groups, without clustering. This linkage map also consisted of 199 AFLP and 26 SSR markers. This WRKY-based marker system is a rapid and simple method for generating sequence-specific markers for plant gene families.
Development of a Sequence Characteristic Amplified Region Marker linked to the L4 Locus Conferring Broad Spectrum Resistance to Tobamoviruses in Pepper Plants
Kim, Hyun Jung;Han, Jung-Heon;Yoo, Jae Hyoung;Cho, Hwa Jin;Kim, Byung-Dong 205
To develop molecular markers linked to the $L^4$ locus conferring resistance to tobamovirus pathotypes in pepper plants, we performed AFLP with 512 primer combinations for susceptible (S pool) and resistant (R pool) DNA bulks against pathotype 1.2 of pepper mild mottle virus. Each bulk was made by pooling the DNA of five homozygous individuals from a T10 population, which was a near-isogenic $BC_4F_2$ generation for the $L^4$ locus. A total of 19 primer pairs produced scorable bands in the R pool. Further screening with these primer pairs was done on DNA bulks from T102, a $BC_{10}F_2$ derived from T10 by back crossing. Three AFLP markers were finally selected and designated L4-a, L4-b and L4-c. L4-a and L4-c each underwent one recombination event, whereas no recombination for L4-b was seen in 20 individuals of each DNA bulk. Linkage analysis of these markers in 112 $F_2$ T102 individuals showed that they were each within 2.5 cM of the $L^4$ locus. L4-b was successfully converted into a simple 340-bp SCAR marker, designated L4SC340, which mapped 1.8 cM from the $L^4$ locus in T102 and 0.9 cM in another $BC_{10}F_2$ population, T101. We believe that this newly characterized marker will improve selection of tobamovirus resistance in pepper plants by reducing breeding cost and time.
Heat Stress Causes Aberrant DNA Methylation of H19 and Igf-2r in Mouse Blastocysts
Zhu, Jia-Qiao;Liu, Jing-He;Liang, Xing-Wei;Xu, Bao-Zeng;Hou, Yi;Zhao, Xing-Xu;Sun, Qing-Yuan 211
To gain a better understanding of the methylation imprinting changes associated with heat stress in early development, we used bisulfite sequencing and bisulfite restriction analysis to examine the DNA methylation status of imprinted genes in early embryos (blastocysts). The paternally imprinted genes, H19 and Igf-2r, had lower methylation levels in heat-stressed embryos than in control embryos, whereas the maternally imprinted genes, Peg3 and Peg1, had similar methylation patterns in heat-stressed and control embryos. Our results indicate that heat stress may induce aberrant methylation imprinting, which results in developmental failure of mouse embryos, and that the effects of heat shock on methylation imprinting may be gene-specific.
Bone Marrow-derived Side Population Cells are Capable of Functional Cardiomyogenic Differentiation
Yoon, Jihyun;Choi, Seung-Cheol;Park, Chi-Yeon;Choi, Ji-Hyun;Kim, Yang-In;Shim, Wan-Joo;Lim, Do-Sun 216
It has been reported that bone marrow (BM)-side population (SP) cells, with hematopoietic stem cell activity, can transdifferentiate into cardiomyocytes and contribute to myocardial repair. However, this has been questioned by recent studies showing that hematopoietic stem cells (HSCs) adopt a hematopoietic cell lineage in the ischemic myocardium. The present study was designed to investigate whether BM-SP cells can in fact transdifferentiate into functional cardiomyocytes. Phenotypically, BM-SP cells were $19.59\%{\pm}9.00\;CD14^+$, $8.22\%{\pm}2.72\;CD34^+$, $92.93\%{\pm}2.68\;CD44^+$, $91.86\%{\pm}4.07\;CD45^+$, $28.48\%{\pm}2.24\;c-kit^+$, $71.09\%{\pm}3.67\;Sca-1^+$. Expression of endothelial cell markers (CD31, Flk-1, Tie-2 and VEGF-A) was higher in BM-SP cells than whole BM cells. After five days of co-culture with neonatal cardiomyocytes, $7.2\%{\pm}1.2$ of the BM-SP cells expressed sarcomeric ${\alpha}$-actinin as measured by flow cytometry. Moreover, BM-SP cells co-cultured on neonatal cardiomyocytes fixed to inhibit cell fusion also expressed sarcomeric ${\alpha}$-actinin. The co-cultured BM-SP cells showed neonatal cardiomyocyte-like action potentials of relatively long duration and shallow resting membrane potential. They also generated calcium transients with amplitude and duration similar to those of neonatal cardiomyocytes. These results show that BM-SP cells are capable of functional cardiomyogenic differentiation when co-cultured with neonatal cardiomyocytes.
N-Acetylphytosphingosine Enhances the Radiosensitivity of Lung Cancer Cell Line NCI-H460
Han, Youngsoo;Kim, Kisung;Shim, Ji-Young;Park, Changsoe;Song, Jie-Young;Yun, Yeon-Sook 224
Ceramides are well-known second messengers that induce apoptosis in various kinds of cancer cells, and their effects are closely related to radiation sensitivity. Phytoceramides, the yeast counterparts of the mammalian ceramides, are also reported to induce apoptosis. We investigated the effect of a novel ceramide derivative, N-acetylphytosphingosine (NAPS), on the radiosensitivity of NCI-H460 human lung carcinoma cells and its differential cytotoxicity in tumor and normal cells. The combination of NAPS with radiation significantly increased clonogenic cell death and caspase-dependent apoptosis. The combined treatment greatly increased Bax expression and Bid cleavage, but not Bcl-2 expression. However, there was no effect on radiosensitivity and apoptosis in BEAS2B cells, which derive from normal human bronchial epithelium. Cell proliferation and DNA synthesis were significantly inhibited by NAPS in both NCI-H460 and BEAS2B cells, but only the BEAS2B cells recovered by 48h after removal of the NAPS. Furthermore, the NCI-H460 cells underwent more DNA fragmentation than the BEAS2B cells in response to NAPS. Our results indicate that NAPS may be a potential radiosensitizing agent with differential effects on tumor vs. normal cells.
Metabolic Engineering of Indole Glucosinolates in Chinese Cabbage Plants by Expression of Arabidopsis CYP79B2, CYP79B3, and CYP83B1
Zang, Yun-Xiang;Lim, Myung-Ho;Park, Beom-Seok;Hong, Seung-Beom;Kim, Doo Hwan 231
Indole glucosinolates (IG) play important roles in plant defense, plant-insect interactions, and stress responses in plants. In an attempt to metabolically engineer the IG pathway flux in Chinese cabbage, three important Arabidopsis cDNAs, CYP79B2, CYP79B3, and CYP83B1, were introduced into Chinese cabbage by Agrobacterium-mediated transformation. Overexpression of CYP79B3 or CYP83B1 did not affect IG accumulation levels, and overexpression of CYP79B2 or CYP79B3 prevented the transformed callus from being regenerated, displaying the phenotype of indole-3-acetic acid (IAA) overproduction. However, when CYP83B1 was overexpressed together with CYP79B2 and/or CYP79B3, the transformed calli were regenerated into whole plants that accumulated higher levels of glucobrassicin, 4-hydroxy glucobrassicin, and 4-methoxy glucobrassicin than wild-type controls. This result suggests that the flux in Chinese cabbage is predominantly channeled into IAA biosynthesis so that coordinate expression of the two consecutive enzymes is needed to divert the flux into IG biosynthesis. With regard to IG accumulation, overexpression of all three cDNAs was no better than overexpression of the two cDNAs. The content of neoglucobrassicin remained unchanged in all transgenic plants. Although glucobrassicin was most directly affected by overexpression of the transgenes, elevated levels of the parent IG, glucobrassicin, were not always accompanied by increases in 4-hydroxy and 4-methoxy glucobrassicin. However, one transgenic line producing about 8-fold increased glucobrassicin also accumulated at least 2.5 fold more 4-hydroxy and 4-methoxy glucobrassicin. This implies that a large glucobrassicin pool exceeding some threshold level drives the flux into the side chain modification pathway. Aliphatic glucosinolate content was not affected in any of the transgenic plants.
Attenuated Neuropathic Pain in CaV3.1 Null Mice
Na, Heung Sik;Choi, Soonwook;Kim, Junesun;Park, Joonoh;Shin, Hee-Sup 242
To assess the role of $\alpha_{1G}$ T-type $Ca^{2+}$ channels in neuropathic pain after L5 spinal nerve ligation, we examined behavioral pain susceptibility in mice lacking $Ca_{V}3.1$ (${{\alpha}_{1G}}^{-/-}$), the gene encoding the pore-forming units of these channels. Reduced spontaneous pain responses and an increased threshold for paw withdrawal in response to mechanical stimulation were observed in these mice. The ${{\alpha}_{1G}}^{-/-}$ mice also showed attenuated thermal hyperalgesia in response to both low-(IR30) and high-intensity (IR60) infrared stimulation. Our results reveal the importance of ${\alpha}_{1G}$ T-type $Ca^{2+}$ channels in the development of neuropathic pain, and suggest that selective modulation of ${\alpha}_{1G}$ subtype channels may provide a novel approach to the treatment of allodynia and hyperalgesia.
Accumulation of Flavonols in Response to Ultraviolet-B Irradiation in Soybean Is Related to Induction of Flavanone 3-β-Hydroxylase and Flavonol Synthase
Kim, Bong Gyu;Kim, Jeong Ho;Kim, Jiyoung;Lee, Choonghwan;Ahn, Joong-Hoon 247
There are several branch points in the flavonoid synthesis pathway starting from chalcone. Among them, the hydroxylation of flavanone is a key step leading to flavonol and anthocyanin. The flavanone 3-${\beta}$-hydroxylase (GmF3H) gene was cloned from soybean (Glycine max cultivar Sinpaldal) and shown to convert eriodictyol and naringenin into taxifolin and dihydrokaempferol, respectively. The major flavonoids in this soybean cultivar were found by LC-MS/MS to be kaempferol O-triglycosides and O-diglycosides. Expression of GmF3H and flavonol synthase (GmFLS) was induced by ultraviolet-B (UV-B) irradiation and their expression stimulated accumulation of kaempferol glycones. Thus, GmF3H and GmFLS appear to be key enzymes in the biosynthesis of the UV-protectant, kaempferol.
Acrolein with an α,β-unsaturated Carbonyl Group Inhibits LPS-induced Homodimerization of Toll-like Receptor 4
Lee, Jeon-Soo;Lee, Joo Young;Lee, Mi Young;Hwang, Daniel H.;Youn, Hyung Sun 253
Acrolein is a highly electrophilic ${\alpha},{\beta}$-unsaturated aldehyde present in a number of environmental sources, especially cigarette smoke. It reacts strongly with the thiol groups of cysteine residues by Michael addition and has been reported to inhibit nuclear factor-${\kappa}B$ ($NF-{\kappa}B$) activation by lipopolysaccharide (LPS). The mechanism by which it inhibits $NF-{\kappa}B$ is not clear. Toll-like receptors (TLRs) play a key role in sensing microbial components and inducing innate immune responses, and LPS-induced dimerization of TLR4 is required for activation of downstream signaling pathways. Thus, dimerization of TLR4 may be one of the first events involved in activating TLR4-mediated signaling pathways. Stimulation of TLR4 by LPS activates both myeloid differential factor 88 (MyD88)- and TIR domain-containing adapter inducing $IFN{\beta}$ (TRIF)-dependent signaling pathways leading to activation of $NF-{\kappa}B$ and IFN-regulatory factor 3 (IRF3). Acrolein inhibited $NF-{\kappa}B$ and IRF3 activation by LPS, but it did not inhibit $NF-{\kappa}B$ or IRF3 activation by MyD88, inhibitor ${\kappa}B$ kinase (IKK)${\beta}$, TRIF, or TNF-receptor-associated factor family member-associated $NF-{\kappa}B$ activator (TANK)-binding kinase 1 (TBK1). Acrolein inhibited LPS-induced dimerization of TLR4, which resulted in the down-regulation of $NF-{\kappa}B$ and IRF3 activation. These results suggest that activation of TLRs and subsequent immune/inflammatory responses induced by endogenous molecules or chronic infection can be modulated by certain chemicals with a structural motif that enables Michael addition.
Expressed Sequence Tag Analysis of Antarctic Hairgrass Deschampsia antarctica from King George Island, Antarctica
Lee, Hyoungseok;Cho, Hyun Hee;Kim, Il-Chan;Yim, Joung Han;Lee, Hong Kum;Lee, Yoo Kyung 258
Deschampsia antarctica is the only monocot that thrives in the tough conditions of the Antarctic region. It is an invaluable resource for the identification of genes associated with tolerance to various environmental pressures. In order to identify genes that are differentially regulated between greenhouse-grown and Antarctic field-grown plants, we initiated a detailed gene expression analysis. Antarctic plants were collected and greenhouse plants served as controls. Two different cDNA libraries were constructed with these plants. A total of 2,112 cDNA clones was sequenced and grouped into 1,199 unigene clusters consisting of 243 consensus and 956 singleton sequences. Using similarity searches against several public databases, we constructed a functional classification of the ESTs into categories such as genes related to responses to stimuli, as well as photosynthesis and metabolism. Real-time PCR analysis of various stress responsive genes revealed different patterns of regulation in the different environments, suggesting that these genes are involved in responses to specific environmental factors.
Identification and Characterization of Single Nucleotide Polymorphisms of SLC22A11 (hOAT4) in Korean Women Osteoporosis Patients
Lee, Woon Kyu;Kwak, Jin Oh;Hwang, Ji-Sun;Suh, Chang Kook;Cha, Seok Ho 265
Single nucleotide polymorphisms (SNPs) are the most common form of human genetic variation. Non-synonymous SNPs (nsSNPs) change an amino acid. Organic anion transporters (OATs) play an important role in eliminating or reabsorbing endogenous and exogenous organic anionic compounds. Among OATs, hOAT4 mediates high affinity transport of estrone sulfate and dehydroepiandrosterone sulfate. The rapid bone loss that occurs in post-menopausal women is mainly due to a net decrease of estrogen. In the present study we searched for SNPs within the exon regions of hOAT4 in Korean women osteoporosis patients. Fifty healthy subjects and 50 subjects with osteoporosis were screened for genetic polymorphism in the coding region of SLC22A11 (hOAT4) using GC-clamp PCR and denaturing gradient gel electrophoresis (DGGE). We found three SNPs in the hOAT4 gene. Two were in the osteoporosis group (C483A and G832A) and one in the normal group (C847T). One of the SNPs, G832A, is an nsSNP that changes the $278^{th}$ amino acid from glutamic acid to lysine (E278K). Uptake of $[^3H]$ estrone sulfate by oocytes injected with the hOAT4 E278K mutant was reduced compared with wild-type hOAT4. Km values for wild type and E278K were $0.7{\mu}M$ and $1.2{\mu}M$, and Vmax values were 1.8 and 0.47 pmol/oocyte/h, respectively. The present study demonstrates that hOAT4 variants can cause inter-individual variation in anionic drug uptake and, therefore, could be used as markers for certain diseases including osteoporosis.
Growth Retardation and Death of Rice Plants Irradiated with Carbon Ion Beams Is Preceded by Very Early Dose- and Time-dependent Gene Expression Changes
Rakwal, Randeep;Kimura, Shinzo;Shibato, Junko;Nojima, Kumie;Kim, Yeon-Ki;Nahm, Baek Hie;Jwa, Nam-Soo;Endo, Satoru;Tanaka, Kenichi;Iwahashi, Hitoshi 272
The carbon-ion beam (CIB) generated by the heavy-ion medical accelerator in Chiba (HIMAC) was targeted to 7-day-old rice. Physiological parameters such as growth, and gene expression profiles were examined immediately after CIB irradiation. Dose-dependent growth suppression was seen three days post-irradiation (PI), and all the irradiated plants died by 15 days PI. Microarray (Agilent rice 22K) analysis of the plants immediately after irradiation (iai) revealed effects on gene expression at 270 Gy; 353 genes were up-regulated and 87 down-regulated. Exactly the same set of genes was affected at 90 Gy. Among the highly induced genes were genes involved in information storage and processing, cellular processes and signaling, and metabolism. RT-PCR analysis confirmed the microarray data.
Clustering Approaches to Identifying Gene Expression Patterns from DNA Microarray Data
Do, Jin Hwan;Choi, Dong-Kug 279
Analysis techniques are essential for interpreting the large amounts of gene expression data produced by microarrays. In this review we focus on clustering techniques. The biological rationale for this approach is the fact that many co-expressed genes are co-regulated, and identifying co-expressed genes could aid in functional annotation of novel genes, de novo identification of transcription factor binding sites and elucidation of complex biological pathways. Co-expressed genes are usually identified in microarray experiments by clustering techniques. There are many such methods, and the results obtained even for the same datasets may vary considerably depending on the algorithms and metrics for dissimilarity measures used, as well as on user-selectable parameters such as the desired number of clusters and initial values. Therefore, biologists who want to interpret microarray data should be aware of the weaknesses and strengths of the clustering methods used. In this review, we survey the basic principles of clustering of DNA microarray data, from crisp clustering algorithms such as hierarchical clustering, K-means and self-organizing maps, to complex clustering algorithms like fuzzy clustering.
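As an illustration of the crisp clustering this abstract surveys (a minimal sketch, not code from the article; the toy expression profiles below are invented), a bare-bones K-means in pure Python groups co-expressed genes by their log-ratio profiles:

```python
def kmeans(points, k, iters=20):
    """Minimal crisp K-means: seed centroids with the first k points,
    then alternate nearest-centroid assignment and centroid recomputation."""
    centroids = [p for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the centroid with the smallest squared Euclidean distance
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # recompute each centroid as the mean of its cluster (keep old if empty)
        centroids = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy log-ratio "expression profiles" over three conditions:
# two co-expressed up-regulated genes and two down-regulated ones.
profiles = [(2.1, 2.0, 1.9), (1.8, 2.2, 2.0), (-1.9, -2.1, -2.0), (-2.0, -1.8, -2.2)]
centroids, clusters = kmeans(profiles, k=2)
```

As the abstract cautions, the result depends on the chosen distance metric, the number of clusters k, and the initial centroids; here the initialization is deliberately deterministic for clarity.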
Repression of Transcriptional Activity of Estrogen Receptor α by a Cullin3/SPOP Ubiquitin E3 Ligase Complex
Byun, Boohyeong;Jung, Yunhwa 289
The role of SPOP in the ubiquitination of $ER{\alpha}$ by the Cullin3-based E3 ubiquitin ligase complex was investigated. We showed that the N-terminal region of SPOP containing the MATH domain interacts with the AF-2 domain of $ER{\alpha}$ in cultured human embryonic 293 cells. SPOP was required for coimmunoprecipitation of $ER{\alpha}$ with Cullin3. This is the first report of the essential role of SPOP in $ER{\alpha}$ ubiquitination by the Cullin3-based E3 ubiquitin ligase complex. We also demonstrated repression of the transactivation capability of $ER{\alpha}$ in cultured mammalian cells.
Arabidopsis Histidine-containing Phosphotransfer Factor 4 (AHP4) Negatively Regulates Secondary Wall Thickening of the Anther Endothecium during Flowering
Jung, Kwang Wook;Oh, Seung-Ick;Kim, Yun Young;Yoo, Kyoung Shin;Cui, Mei Hua;Shin, Jeong Sheop 294
Cytokinins are essential hormones in plant development. $\underline{A}$rabidopsis $\underline{h}$istidine-containing $\underline{p}$hosphotransfer proteins (AHPs) are mediators in a multistep phosphorelay pathway for cytokinin signaling. The exact role of AHP4 has not been elucidated. In this study, we demonstrated young flower-specific expression of AHP4, and compared AHP4-overexpressing (Ox) transgenic Arabidopsis lines and an ahp4 knock-out line. AHP4-Ox plants had reduced fertility due to a lack of secondary cell wall thickening in the anther endothecium and inhibition of IRREGULAR XYLEMs (IRXs) expression in young flowers. Conversely, ahp4 anthers had more lignified anther walls than the wild type, and increased IRXs expression. Our study indicates that AHP4 negatively regulates thickening of the secondary cell wall of the anther endothecium, and provides new insight into the role of cytokinins in formation of secondary cell walls via the action of AHP4.
Genetic Characteristics of 207 Microsatellite Markers in the Korean Population and in other Asian Populations
Choi, Su-Jin; Song, Hye-Kyung; Jeong, Jae-Hwan; Jeon, In-Ho; Yoon, Ho-Sung; Chung, Ki Wha; Won, Yong-Jin; Choi, Je-Yong; Kim, Un-Kyung 301
Microsatellites, short tandem repeats, are useful markers for genetic analysis because of their high frequency of occurrence over the genome, high information content due to variable repeat lengths, and ease of typing. To establish a panel of microsatellite markers useful for genetic studies of the Korean population, the allele frequencies and heterozygosities of 207 microsatellite markers in 119 unrelated Korean, Indian and Pakistani individuals were compared. The average heterozygosity of the Korean population was 0.71, similar to that of the Indian and Pakistani populations. More than 80% of the markers showed heterozygosity of over 0.6 and were valuable as genetic markers for genome-wide screening for disease susceptibility loci in these populations. To identify the allelic distributions of the multilocus genetic data from these microsatellite markers, the population structures were assessed by clustering. These markers supported, with the highest probability, three clusters corresponding to the three geographical populations. When we assumed only two hypothetical clusters (K), the Korean population was separate from the others, suggesting a relatively deep divergence of the Korean population. The present 207 microsatellite markers appear to reflect the historical and geographical origins of the different populations as well as displaying a similar degree of variation to that seen in previously published genetic data. Thus, these markers will be useful as a reference for human genetic studies on Asians.
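The average heterozygosity values quoted above follow from the standard expected-heterozygosity formula H = 1 - sum(p_i^2), where p_i are allele frequencies at a locus. A minimal sketch with hypothetical allele counts (not the study's data):

```python
def expected_heterozygosity(allele_counts):
    """Expected heterozygosity at one locus: H = 1 - sum(p_i**2)."""
    n = sum(allele_counts)
    return 1.0 - sum((c / n) ** 2 for c in allele_counts)

# hypothetical marker with four alleles observed 10, 5, 3 and 2 times
h = expected_heterozygosity([10, 5, 3, 2])  # p = 0.50, 0.25, 0.15, 0.10
```

Averaging this statistic over all 207 markers would give a panel-wide figure comparable to the 0.71 reported for the Korean population.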
Arginine Deiminase Enhances MCF-7 Cell Radiosensitivity by Inducing Changes in the Expression of Cell Cycle-related Proteins
Park, Hwan; Lee, Jun-Beom; Shim, Young-Jun; Shin, Yong-Jae; Jeong, Seong-Yun; Oh, Junseo; Park, Gil-Hong; Lee, Kee-Ho; Min, Bon-Hong 305
After successful clinical application, arginine deiminase (ADI) has been proposed as a new cancer therapeutic. In the present study, we examined the effect of ADI in combination with ionizing radiation (IR) on MCF-7 cell growth and clonogenic cell death. Cell growth was inhibited by IR in a dose-dependent manner and ADI enhanced the radiosensitivity. ADI itself did not suppress the growth of MCF-7 cells due to the high level of expression of argininosuccinate synthetase (ASS), which converts citrulline, a product of arginine degradation by ADI, to arginine. Previously, it was suggested that ammonia, another product of arginine degradation by ADI, is the main cause of the growth inhibition of irradiated hepatoma cells contaminated with ADI-expressing mycoplasma [van Rijn et al. (2003)]. However, we found that ammonia is not the only factor that enhances radiosensitivity, as enhancement was also observed in the absence of ammonia. To identify the basis of this enhancement, levels of ASS and proteins related to the cell cycle were examined. ASS was unchanged by ADI plus IR, but p21 (a CDK inhibitor) was upregulated and c-Myc downregulated. These findings indicate that changes in the expression of cell-cycle proteins are involved in the enhancement of radiosensitivity by ADI. We suggest that ADI is a potential adjunct to cancer therapy.
Flavanone 3β-Hydroxylases from Rice: Key Enzymes for Flavonol and Anthocyanin Biosynthesis
Kim, Jeong Ho; Lee, Yoon Jung; Kim, Bong Gyu; Lim, Yoongho; Ahn, Joong-Hoon 312
Flavanone 3β-hydroxylases (F3H) are key enzymes in the synthesis of flavonols and anthocyanins. In this study, three F3H cDNAs from Oryza sativa (OsF3H-1 ~ 3) were cloned by RT-PCR and expressed in E. coli as glutathione S-transferase (GST) fusion proteins. The purified recombinant OsF3Hs used the flavanones naringenin and eriodictyol as substrates. The reaction products with naringenin and eriodictyol were determined by nuclear magnetic resonance spectroscopy to be dihydrokaempferol and taxifolin, respectively. OsF3H-1 had the highest enzymatic activity, whereas the overall expression of OsF3H-2 was highest in all tissues except seeds. Flavanone 3β-hydroxylase could be a useful target for flavonoid metabolic engineering in rice.
Stage-specific Expression of Ankyrin and SOCS Box Protein-4 (Asb-4) during Spermatogenesis
Kim, Soo-Kyoung; Rhim, Si Youn; Lee, Man Ryul; Kim, Jong Soo; Kim, Hyung Jun; Lee, Dong Ryul; Kim, Kye-Seong 317
Members of the large family of Asb proteins are ubiquitously expressed in mammalian tissues; however, the roles of individual Asb proteins and their function in the developing testis have not been reported. In this report, we isolated a murine Asb-4 from mouse testis. Northern blot analysis revealed that mAsb-4 was expressed only in testes and produced in a stage-specific manner during spermatogenesis. It was expressed in murine testes beginning in the fourth week after birth and extending into adulthood. Pachytene spermatocytes had the highest level of expression. Interestingly, the human homologue of mAsb-4, ASB-4 (hASB-4), was also expressed in human testis. These results suggest that ASB-4 plays pivotal roles in mammalian testis development and spermatogenesis.
BMC Biotechnology
Isolation of axenic cyanobacterium and the promoting effect of associated bacterium on axenic cyanobacterium
Suqin Gao, Yun Kong, Jing Yu, Lihong Miao, Lipeng Ji, Lirong Song & Chi Zeng
BMC Biotechnology volume 20, Article number: 61 (2020)
Harmful cyanobacterial blooms have attracted wide attention all over the world as they cause water quality deterioration and ecosystem health issues. Microcystis aeruginosa, which is associated with a large number of bacteria, is one of the most common and widespread bloom-forming cyanobacteria that secrete toxins. These associated bacteria are considered to benefit from organic substrates released by the cyanobacterium. In order to avoid the influence of associated heterotrophic bacteria on the target cyanobacteria in physiological and molecular studies, there is an urgent need to obtain an axenic M. aeruginosa culture and to investigate the specific interaction between the heterotrophs and the cyanobacterium.
A traditional and reliable method based on solid-liquid alternate cultivation was used to purify the xenic cyanobacterium M. aeruginosa FACHB-905. On the basis of 16S rDNA gene sequences, two associated bacteria, strain B905–1 and strain B905–2, were identified as Pannonibacter sp. and Chryseobacterium sp. with 99% and 97% similarity values, respectively. The axenic M. aeruginosa FACHB-905A (Microcystis 905A) was not able to form colonies on BG11 agar medium without the addition of strain B905–1, while it grew well in BG11 liquid medium. Although the presence of B905–1 was not indispensable for the growth of Microcystis 905A in liquid medium, B905–1 had a positive effect on promoting the growth of Microcystis 905A.
The associated bacteria were eliminated by the solid-liquid alternate cultivation method and the axenic Microcystis 905A was successfully purified. The associated bacterium B905–1 has the potential to promote the growth of Microcystis 905A. Moreover, the purification technique for cyanobacteria described in this study is potentially applicable to a wider range of unicellular cyanobacteria.
The interactions between phototrophic phytoplankton and heterotrophic bacteria are considered an integral part of the algal/cyanobacterial life cycle. For example, diatoms and bacteria coexist in the ocean and coevolve in complex interactions that significantly modify each other's behavior and ultimately impact biogeochemical cycles [1,2,3]. This interaction plays an important role in photosynthesis and is therefore crucial for the metabolism of phototrophic phytoplankton. The relations between phototrophic phytoplankton and heterotrophic bacteria are much better understood than those between zooplankton and bacteria, and three types of interaction are generally recognized: (i) a mutualistic relationship, in which phytoplankton benefit from bacterial products such as nutrients, whereas bacteria profit from phytoplankton products such as extracellular polymeric substances [4]; (ii) an antagonistic relationship, in which the growth of phytoplankton is restricted or inhibited by bacteria, either through direct algal-bacterial/cyanobacterial-bacterial contact (direct interaction) or through secretion of extracellular antialgal/anticyanobacterial substances (indirect interaction) [5, 6]; and (iii) a commensal relationship, in which bacteria are loosely associated with phytoplankton and may promote growth and photosynthesis without having any negative effect, while the phytoplankton grow well without the associated bacteria [7, 8]. Which scenario applies may depend on the phototrophic phytoplankton species, the associated bacterial species and the substances secreted by the associated heterotrophic bacteria [4].
Harmful cyanobacterial blooms (HCBs) in lakes, reservoirs and rivers have drawn great attention all over the world, as microcystin-producing cyanobacteria cause animal and human health concerns [5, 6, 9]. Microcystis aeruginosa, a unicellular, photoautotrophic and gram-negative cyanobacterium belonging to the genus Microcystis, division Cyanophyta, is one of the most common and widespread bloom-forming cyanobacteria that secrete toxins [5, 6, 10]. Previous studies show that the cyanobacterium is associated with a large number of bacteria, and these associated heterotrophic bacteria (heterotrophs) are considered to benefit from organic substrates released by the cyanobacterium [11,12,13,14,15,16,17,18]. In order to avoid the influence of heterotrophs in physiological and molecular studies, purification of an axenic (bacteria-free) cyanobacterium is especially important, as is an understanding of its responses to the heterotrophs.
Various methods including UV irradiation, sonication, micropipette manipulation, phenol treatment, antibiotic treatment and lysozyme treatment have been used for cyanobacteria purification [19,20,21,22,23,24,25]. A previous study showed that treatment with antibiotics is a successful strategy for obtaining axenic cyanobacterial cultures [1]. Additionally, solid medium is simple and useful for the growth and isolation of axenic Microcystis strains, and two axenic Microcystis strains have been obtained in this way [24, 26]. Although the direct and indirect inhibitory effects of bacteria on cyanobacteria have been intensively studied [3, 5, 6, 12, 13, 27], and associated bacteria are thought to regulate cyanobacterial growth via extracellular amino acid monomers or other substances [5, 6], the growth-promoting effects of heterotrophs on cyanobacteria have not received much attention. Apart from cyanobacterium purification, the growth-promoting effect of heterotrophs on the cyanobacterium is a significant aspect of understanding the interactions between heterotrophs and cyanobacteria. Therefore, the aim of the present study is to obtain an axenic M. aeruginosa culture and investigate the specific interaction between the heterotrophs and the cyanobacterium.
Isolation and purification of the axenic culture
M. aeruginosa 905 and 907 samples were curated by the Freshwater Algae Culture Collection of the Institute of Hydrobiology (FACHB) as xenic consortia comprising one M. aeruginosa strain and its associated heterotrophic bacteria. The colony-forming process of the cyanobacterium and heterotrophs on solid plates (BG11 agar medium) was observed by inverted phase-contrast microscopy, and the results are shown in Fig. 1. The heterotroph colonies were much bigger than the cyanobacterium colonies, indicating that the heterotrophs grew much better than the cyanobacterium. A cyanobacterium colony had formed after 15 d of culture, although it was small; moreover, cyanobacterial colonies were found in only 3 of the 20 replicate plates even after incubation for 20 d. The isolated cyanobacterial colonies were then transferred into 6 test tubes with a Pasteur pipette under the microscope and incubated for 3 d. Five tubes became green, indicating that the cyanobacterium grew well. After several cycles of purification, the axenic M. aeruginosa FACHB-905A (Microcystis 905A) was obtained. Possible contamination by heterotrophs was examined before and after the incubations, and no contamination was found. A molecular identification was then carried out for the purified axenic cyanobacterium, named Microcystis 905A. Microcystis 905A showed the highest sequence similarity (99% identity) with M. aeruginosa NIES-843, M. aeruginosa PCC 7820 and M. aeruginosa PCC 7806.
The growth of cyanobacterial and heterotrophic colonies (× 100). (a, b and c show the colonial morphology after culture for 1, 8 and 15 d, respectively)
Identification of associated bacteria
Two gram-negative bacteria, named B905–1 and B905–2, were isolated from the xenic M. aeruginosa FACHB-905 (Microcystis 905). To identify them, phylogenetic analyses were performed using 16S rDNA sequences. A 1367 bp sequence was determined for each of the two isolated strains, and the 16S rDNA gene sequences obtained were subjected to GenBank BLAST analyses [28]. Strain B905–1 was most closely related to Pannonibacter phragmitetus L-s-R2A-19.4 with a 99% similarity value, and strain B905–2 was most closely related to Chryseobacterium sp. with a 97% similarity value. By the same method, the xenic M. aeruginosa FACHB-907 (Microcystis 907) was diluted and plated on BG11 solid medium. After culturing under the conditions described in Section "Culture of cyanobacteria and heterotrophs" for 15 ~ 20 d, single cyanobacterial colonies were transferred into test tubes with a Pasteur pipette under the microscope and incubated for 3 d. Five tubes became green, indicating that the cyanobacterium grew well. After several cycles of purification, the axenic Microcystis 907A and another heterotroph, B907–1, were also successfully isolated; B907–1 was identified as Agrobacterium sp., being most closely related to Agrobacterium sp. PNS-1 and Agrobacterium albertimagni C0008 with a 98% similarity value. The sequences of B905–1 and B907–1 were imported into DNAMAN software V6 and aligned [29]. A phylogenetic tree was then constructed (Fig. 2), further confirming that strains B905–1 and B907–1 are closely related to Pannonibacter sp. and Agrobacterium sp., respectively.
The phylogenetic tree of heterotrophs
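The similarity values reported for B905–1, B905–2 and B907–1 are percent-identity scores over aligned 16S rDNA sequences. A minimal sketch of how such a score is computed, using toy sequences rather than the actual 16S data:

```python
def percent_identity(a, b):
    """Percent identity between two aligned, equal-length sequences ('-' = gap)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    # compare only positions where neither sequence has a gap
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

# toy aligned fragments, not the study's sequences
score = percent_identity("ACGTACGTAC", "ACGTACGTAA")  # 9 of 10 positions match
```

In practice the alignment itself comes from a tool such as BLAST or DNAMAN; this sketch only shows the identity calculation applied to an existing alignment.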
Effect of associated bacteria on Microcystis 905A
The growth rates of the xenic culture (Microcystis 905) and the axenic culture (Microcystis 905A) were measured under both static cultivation (no shaking) and shaking cultivation (150 rpm). Figure 3 shows that the generation time of the axenic culture was 42.3 h (shaking cultivation) and 60.9 h (static cultivation), while that of the xenic culture was 33.6 h under shaking cultivation and 45.3 h under static cultivation. The generation time of the xenic culture was much shorter than that of the axenic culture under the same cultivation condition, suggesting that the photosynthetic efficiency of Microcystis 905 was higher. At the same time, both the xenic and axenic cultures grew much faster under shaking cultivation than under static cultivation. These results point to a role of the heterotrophs in promoting the growth of Microcystis 905A.
The growth curves of axenic Microcystis 905A and xenic Microcystis 905
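The generation times above follow from the exponential-growth relation g = t · ln 2 / ln(Nt/N0). A sketch with hypothetical cell counts (the paper's raw counts are not reproduced here):

```python
import math

def generation_time(n0, nt, hours):
    """Doubling time g = t * ln(2) / ln(Nt / N0) during exponential growth."""
    return hours * math.log(2) / math.log(nt / n0)

# hypothetical counts: a 16-fold increase (4 doublings) over 168 h gives g = 42 h,
# close to the 42.3 h reported for the axenic culture under shaking cultivation
g = generation_time(1.0e5, 1.6e6, 168.0)
```

The same formula applied to counts from static versus shaking flasks yields the four generation times compared in Fig. 3.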
Effect of heterotroph-cyanobacterium ratio on Microcystis 905A
To further study the effect of heterotroph B905–1 on the growth of axenic Microcystis 905A, a series of experiments with different initial cyanobacterial cell concentrations at heterotroph-cyanobacterium ratios of 1:1, 1:10 and 1:100 was undertaken in BG11 liquid medium (Fig. 4). Compared with the control group (CK), the cyanobacterial cell numbers of the 1:1 treatment group were slightly suppressed during the 21 d, while the 1:10 and 1:100 treatment groups showed a remarkable increase that grew with culture time. The highest cyanobacterial cell numbers for the 1:10 and 1:100 treatment groups were (14.72 ± 0.48) × 10⁶ cells mL⁻¹ and (10.63 ± 0.37) × 10⁶ cells mL⁻¹, respectively (Fig. 4d), both reached on the 21st day. The cyanobacterial cell numbers for the 1:10 treatment group were much higher than those for the 1:100 treatment group under the same conditions, which might be due to the higher concentration of strain B905–1 added at the beginning of the experiment. These results indicate that the addition of heterotroph B905–1 had a positive promoting effect on the growth of Microcystis 905A.
Effects of heterotroph-cyanobacterium ratio on axenic Microcystis 905A (a, b, c and d show initial cyanobacterial cell numbers of 3.0 × 10², 3.0 × 10³, 3.0 × 10⁴ and 3.0 × 10⁵ cells mL⁻¹, respectively). * and ** represent statistically significant differences of p < 0.05 and p < 0.01 compared to the control
The growth of axenic Microcystis 905A on BG11 agar medium with and without the addition of strain B905–1 was also investigated. In treatments with strain B905–1 added, the cyanobacterial colonies of axenic Microcystis 905A became green after incubation for 20 days; in treatments without strain B905–1, no cyanobacterial colonies appeared on BG11 agar medium. Moreover, the effects of different heterotroph-cyanobacterium ratios (1:1, 1:10 and 1:100) on the growth of the cyanobacterium on BG11 agar medium were studied (Fig. 5). Interestingly, Microcystis 905A was unable to grow in the 1:1 treatment (Fig. 5b), but grew well in both the 1:10 and 1:100 treatments (Fig. 5c and d). These results indicate that a high heterotroph-cyanobacterium ratio (1:1) was unfavorable for the growth of Microcystis 905A: when the initial concentrations of B905–1 and axenic M. aeruginosa were equal, the growth of M. aeruginosa was inhibited on both BG11 liquid and agar medium. A previous study showed that BG11 can become carbon- or phosphate-limited in dense cultures of some cyanobacteria [30]. The best growth of Microcystis 905A in the 1:10 condition suggests that the C:P balance influenced by B905–1 was optimal there: the heterotroph produced a growth-enhancing amount of CO2 without consuming too much phosphate in competition with Microcystis 905A.
Effects of strain B905–1 on axenic Microcystis 905A cultured on plates. (a is the control without strain B905–1; b, c and d are treatments with strain B905–1 added at initial cell numbers of 1.0 × 10⁴, 1.0 × 10³ and 1.0 × 10² cells mL⁻¹, respectively)
To test whether the promoting effect was associated with the extracellular substances of strain B905–1, the effect of the cell-free filtrate of strain B905–1 on the growth of Microcystis 905A was examined (Fig. 6). The cyanobacterial cell number in the filtrate treatment was 9.23 ± 0.56, 11.31 ± 1.85 and 22.14 ± 1.06 cells mL⁻¹ after incubation for 4 d, 8 d and 12 d, respectively, clearly higher than without the filtrate. That the axenic cyanobacterium grew much better with the addition of the cell-free filtrate again demonstrates the promoting effect of strain B905–1 on the growth of Microcystis 905A; moreover, the released substances of strain B905–1 responsible for the promoting effect were evidently present in the cell-free filtrate.
Effect of cell-free filtrate of strain B905–1 on axenic Microcystis 905A. * and ** represent statistically significant differences of p < 0.05 and p < 0.01 compared with the control
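The p < 0.05 and p < 0.01 marks in Figs. 4 and 6 come from comparing treatment and control means. Assuming triplicate counts, a Welch t statistic (from which a p-value is then read off a t distribution) can be computed as below; the counts used are hypothetical, not the study's data:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# hypothetical triplicate cell counts (arbitrary units): control vs. treatment
t = welch_t([10.0, 12.0, 11.0], [20.0, 22.0, 21.0])
```

A large |t| relative to the Welch-Satterthwaite degrees of freedom corresponds to a small p-value; the paper does not state which exact test it used, so this is only an illustrative choice.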
HCBs occur around the world and are responsible for much aquatic environmental pollution [9, 10]. Research on HCBs has concentrated on physical, chemical and bio-ecological methods for the control of cyanobacteria and the removal of nitrogen and phosphorus [5, 9]. Little is known about the microbial community of cyanobacteria with heterotrophs and the interactions between them [2, 3]. Previous studies demonstrated that the oxic cyanobacterial layer of eutrophic water is composed mainly of cyanobacteria and aerobic heterotrophic microorganisms, and that the relationships between them are complicated [31, 32]. Therefore, it is necessary to obtain axenic M. aeruginosa from this complex microbial community and further study the interactions between cyanobacteria and heterotrophs.
Traditionally, cyanobacterial purification methods including antibiotic treatment and lysozyme treatment have been applied to eliminate heterotrophs from cyanobacteria and algae [20, 21, 25, 33], with the purification effect depending on the concentrations and types of antibiotic or lysozyme [22,23,24,25]. With a series of antibiotic and lysozyme procedures, axenic cyanobacteria such as Anabaena flos-aquae, Aphanothece nidulans, Arthrospira platensis and Arthrospira spp. have been obtained [22, 23]. However, the sensitivities of the xenic cyanobacterium Microcystis 905 to the five antibiotics employed in the present study were quite different; in particular, four of the tested antibiotics inhibited cyanobacterial growth. Furthermore, lysozyme inhibited both the cyanobacterium and the heterotrophs simultaneously (Supporting Information, Table S1 and Fig. S1), making it quite difficult to eliminate the heterotrophs from the xenic Microcystis 905 culture by antibiotic or lysozyme treatment. Studies indicate that bloom-forming cyanobacteria in freshwater or seawater occur most often in nutrient-rich environments and are surrounded by diverse communities of heterotrophic bacteria [31, 32, 34]. The difficulty in obtaining axenic Microcystis 905 is probably due to the limited knowledge of the heterotrophs in the xenic culture.
Heterotrophs can colonize the enclosed region or directly adhere to the surface of a cyanobacterium colony [34]. By transferring and culturing a xenic culture of Arthrospira platensis in fresh sterile medium, axenic A. platensis has been obtained by single-trichome manipulation with a microtrowel [35]. Considering that xenic Microcystis 905 readily forms single cyanobacterial colonies on BG11 agar plates and that the growth rates of Microcystis and heterotrophs differ significantly, heterotrophs were removed by the solid-liquid alternate cultivation method and the micropipette technique, that is, by picking single cyanobacterial colonies and transferring them to BG11 liquid medium under the microscope. This method not only guarantees a minimum initial density of cyanobacterial cells, but also ensures the purity of the cyanobacterial cells, and thus resulted in the successful separation of axenic Microcystis 905A. It was also successfully applied to purify another strain, axenic Microcystis 907A. Although the traditional standard plate method based on solid-liquid alternate cultivation is time-consuming, the protocol we developed for purifying the axenic Microcystis 905A culture may be suitable for separating axenic strains from a commensal, and potentially syntrophic, symbiosis. These results indicate that the technique is at least applicable to unicellular cyanobacteria.
Molecular biological techniques such as denaturing gradient gel electrophoresis (DGGE) and fluorescence in situ hybridization have been used to investigate the purity of cyanobacterial cultures [17, 30]. DGGE results suggest that a number of bacteria, including α-proteobacteria, β-proteobacteria, γ-proteobacteria, Bacteroidetes and Actinobacteria, are present in cyanobacterial cultures, with Sphingomonadales the prevalent group among Microcystis-associated bacteria [17]; in another study, heterotrophs such as Aeromicrobium alkaliterrae, Halomonas desiderata and Staphylococcus saprophyticus were identified from an Arthrospira platensis culture [25]. Heterotrophic bacteria such as α-proteobacteria and bacteria from the Bacteroidetes group are reported to associate with diatoms in nature as well as in stock cultures [1]. We observed that the heterotroph strains B905–1 and B905–2 are closely related to Pannonibacter sp. and Chryseobacterium sp., respectively. Beyond the identification of the heterotrophs, more attention should be paid to the interactions between heterotrophs and the cyanobacterium M. aeruginosa. The interaction between heterotrophs and cyanobacteria has been suggested to be symbiotic or parasitic [3, 36], and heterotrophs are difficult to separate from the cyanobacterium during the formation of cyanobacterial or algal colonies [1, 37].
Heterotrophs can enhance or suppress the growth of cyanobacteria, or even kill them [31, 34]. To better understand the general interaction between heterotroph and cyanobacterium, the effect of strain B905–1 on the cyanobacterium M. aeruginosa FACHB-905A was studied. The growth rate of the xenic Microcystis 905 was much faster than that of the axenic Microcystis 905A under both static and shaking cultivation, indicating that the heterotroph B905–1 has a promoting effect on the growth of axenic Microcystis 905A. Considering that the initial cell number of Microcystis 905 was (2.2 ± 0.2) × 10⁶ cells mL⁻¹ and that of heterotroph B905–1 was (0.64 ± 0.07) × 10⁶ cells mL⁻¹, it is not surprising that the growth-promoting effect of the 1:10 treatment was much stronger than that of the 1:100 treatment. Interestingly, Microcystis 905A was unable to form colonies in the 1:1 treatment group on BG11 agar medium. Although M. aeruginosa is a photosynthetic (autotrophic) bacterium that grows well under light with the inorganic nutrients supplied by BG11 liquid medium, it is not surprising that axenic Microcystis 905A could not divide at a heterotroph-cyanobacterium ratio of 1:1, as the heterotroph B905–1 can effectively compete for nutrients with axenic Microcystis 905A.
A growth-promoting effect of heterotrophs on algae has also been observed in other studies; for example, the growth of the toxic dinoflagellate Alexandrium fundyense is substantially promoted by Alteromonas sp. [8], and attached bacteria enable the co-existing diatom Thalassiosira weissflogii to form transparent exopolymer particles [4]. Such phenomena might be explained by a symbiotic interaction in which the bacteria deliver vitamins to the algae [38], or by the added bacteria changing the available nutrient concentration, such as extracellular organic carbon or dissolved organic matter [2, 4, 14, 17, 31]. In a previous study, the growth rates and metabolic products of Shewanella putrefaciens, Brochothrix thermosphacta and Pseudomonas sp. showed a remarkable increase whether cultured individually or in all possible combinations, compared with the control cultures [39]. In contrast to the above microorganisms, axenic diatoms are unable to form biofilms when purified of bacteria [4]. Although axenic Microcystis 905A grows well in liquid culture, it could not form cyanobacterial colonies on BG11 agar plates without the addition of strain B905–1, indicating that the presence of heterotroph B905–1 is indispensable for the growth of axenic Microcystis 905A on BG11 agar plates. The different growth of Microcystis 905A in solid and liquid BG11 medium is mainly attributed to phosphate. Reactive oxygen species (ROS) are reportedly produced when phosphate is autoclaved together with agar, and total colony counts of Gemmatimonas aurantiaca in liquid medium (without agar) were remarkably higher than those on solid medium (with agar) [40]. In the same way, ROS may be produced in BG11 solid medium, and this ROS is likely a contributing factor in the growth inhibition of Microcystis 905A.
It is speculated that the heterotrophic bacterium B905–1, closely associated with the cyanobacterium, likely consumes nutrients released by Microcystis 905, and may also produce vitamins and other metabolites beneficial for cyanobacterial growth [32, 34]. Nevertheless, the role of strain B905–1 in the mechanism of cyanobacterial colony formation needs further study.
A previous study also indicates that the growth of axenic Microcoleus chthonoplastes PCC 7420 is enhanced upon the addition of a filtrate obtained from the closely related xenic culture of Microcoleus sp. M2C3, and the stimulatory effect could be due to the release of certain growth factors and vitamins by associated aerobic heterotrophic microorganisms [31]. Most strains are able to secrete active substances that inhibit or enhance the growth of cyanobacteria [41]. Possible mechanisms include various types of interaction, from nutrient cycling to the production of growth-inhibiting and cell-lysing compounds [42]. Our results demonstrate that strain B905–1 has the potential to promote Microcystis 905A growth, whereas Microcystis 905A provides organic matter for the proliferation of the associated bacterium. A comparable study pointed out that bacteria have the potential to control diatom growth, and that their interactions are regulated by multiple signals involving common biomolecules such as proteins, polysaccharides and their respective monomers [14]. In accordance with previous observations, we find that the associated bacterium has a promoting effect on the growth of the cyanobacterium M. aeruginosa. Increasing knowledge of the molecular mechanisms of microbial interactions is crucial for better understanding or predicting nutrient and organic matter cycling in aquatic environments, and for understanding the role of such associated bacteria in the formation of HCBs and in eutrophication control.
To date, most studies on the interaction between heterotrophs and cyanobacteria have been performed in pure cultures [32, 34, 41], in which the growth of axenic cyanobacteria is mostly promoted by the heterotrophs [8, 32, 34]. However, the interaction can be profoundly different in nature, as most microbes are not axenic but grow together in communities. Complex communities or microbial networks often produce surprisingly coordinated multicellular behaviour; e.g., dinoflagellates can feed on associated bacteria, and heterotrophs can also attack and lyse cyanobacteria [31]. Furthermore, heterotrophs are considered to play a significant role in carbon cycling and cyanobacterial photosynthesis [31]. All these studies suggest that the relationship between heterotrophs and cyanobacteria in nature is complex and manifold, and further analysis is needed for a full understanding of the microbial communities surrounding cyanobacteria.
Our results showed that heterotrophs were eliminated by the solid-liquid alternate cultivation method and that axenic Microcystis 905A was successfully purified by picking single cyanobacterial colonies and transferring them to BG11 liquid medium under the microscope; moreover, two heterotrophs, strain B905–1 and strain B905–2, were identified as Pannonibacter sp. and Chryseobacterium sp. with 99% and 97% similarity values, respectively, on the basis of 16S rDNA gene sequences. Further, strain B905–1 has the potential to promote the growth of Microcystis 905A. The purification technique for cyanobacteria described in this study is potentially applicable to a wider range of unicellular cyanobacteria.
Culture of cyanobacteria and heterotrophs
The xenic Microcystis 905 and Microcystis 907 used in this study were purchased from the FACHB, Chinese Academy of Sciences (Wuhan, China). Sterilized BG11 liquid medium or BG11 agar medium (agar concentration 1.5%) was used as the main culture medium for both axenic and xenic M. aeruginosa [5, 6, 43]. Before being used as inoculants, cyanobacteria were cultured in 200 mL BG11 liquid medium in 500 mL Erlenmeyer flasks for 7 days to reach the log phase, under the following conditions: 2000 lx white light, 14 h light : 10 h dark cycle, 25 ± 1 °C [5, 6]. Axenic Microcystis 905A was obtained by micro-picking from the Microcystis 905 culture.
Bacterial strains B905–1 and B905–2 were isolated from the culture solution of the cyanobacterium Microcystis 905. These two bacteria were routinely grown in TY liquid medium [44] at 28 ± 1 °C under aerobic conditions (shaking at 150 rpm). The cell-free filtrate of strain B905–1 was obtained by centrifuging the fermentation broth at 10,000×g for 10 min and then filtering the supernatant through a 0.22 μm cellulose acetate membrane [5]. Stock cultures were kept at 4 °C, and working cultures were obtained from stock cultures through two transfers in TY liquid medium.
Isolation and purification of axenic culture
For the isolation and purification of axenic cultures, cyanobacterial cells were treated by the solid-liquid alternate cultivation method. The xenic cyanobacterium was serially diluted from 10⁻¹ to 10⁻⁸, and each dilution was inoculated onto sterile Petri dishes containing BG11 agar medium [45]. After incubation for 15 to 20 d under the culture conditions described above, a single cyanobacterial colony was picked with a Pasteur pipette under a microscope and transferred into a test tube containing 5 mL BG11 liquid medium. Once the test-tube culture turned green, the purification result was checked as follows: 0.1 mL of the cyanobacterial culture was spread on a Luria-Bertani (LB) agar plate [44, 46] and incubated at room temperature for 3 d or more to test for heterotrophs; the absence of heterotrophs indicated that the cyanobacterial culture was axenic. After purification, the axenic cyanobacterial colony was picked with a Pasteur pipette, transferred to Erlenmeyer flasks containing BG11 liquid medium, and incubated at 25 ± 1 °C under a 14 h light : 10 h dark cycle. The purification procedure for axenic Microcystis is illustrated in Fig. 7.
Purification procedure for axenic culture of Microcystis
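As a rough illustration of the dilution step, the expected cell density at each tenfold dilution can be tabulated; the starting density and the plated volume below are hypothetical values chosen only to show the arithmetic, not measurements from this study.

```python
def dilution_series(initial_cells_per_ml, n_dilutions=8, factor=10):
    """Expected cell densities for a serial dilution 10^-1 .. 10^-n."""
    return [initial_cells_per_ml / factor ** i for i in range(1, n_dilutions + 1)]

# Hypothetical xenic culture at 1.0e6 cells/mL, diluted 10^-1 through 10^-8:
densities = dilution_series(1.0e6)

# Plating 0.1 mL of the 10^-4 dilution (100 cells/mL) deposits ~10 cells,
# sparse enough that individual colonies can be picked.
cells_plated = densities[3] * 0.1
```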
Cyanobacterial inhibition bioassay
Growth curves of axenic Microcystis 905A and xenic Microcystis 905 were measured at an initial cell number of 1.0 × 10⁶ cells mL⁻¹. The effect of the heterotroph-cyanobacterium ratio on growth at four different initial axenic cell concentrations in BG11 liquid medium was tested as follows: axenic Microcystis 905A was first added to 250 mL sterilized Erlenmeyer flasks containing 100 mL BG11 liquid medium at cell numbers of 3.0 × 10², 3.0 × 10³, 3.0 × 10⁴ and 3.0 × 10⁵ cells mL⁻¹, respectively; strain B905–1 (initial cell number 2.73 × 10⁷ cells mL⁻¹) was then added at heterotroph-cyanobacterium ratios of 1:1, 1:10 and 1:100. The controls (CK) received no strain B905–1. For the effect of the heterotroph-cyanobacterium ratio on the growth of axenic Microcystis 905A on BG11 agar medium, the heterotroph (B905–1) and cyanobacterium (axenic M. aeruginosa) were mixed well in BG11 liquid medium; final heterotroph-cyanobacterium ratios of 1:1, 1:10 and 1:100 were obtained by adding different amounts of bacterium to 100 mL of axenic M. aeruginosa culture with an initial cell number of 1.0 × 10⁴ cells mL⁻¹. The mixed suspensions were serially diluted and plated on BG11 agar medium, with three replicates per dilution.
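The bacterial inoculum volume implied by a target heterotroph-cyanobacterium ratio can be sketched as below. The helper function is illustrative: the 2.73 × 10⁷ cells mL⁻¹ stock density is taken from the text, but the interpretation of the ratio as total bacterial cells per total cyanobacterial cells is an assumption.

```python
def inoculum_volume_ml(target_ratio, cyano_cells_per_ml, culture_ml,
                       bacteria_stock_per_ml):
    """Volume of bacterial stock (mL) so that total bacterial cells :
    total cyanobacterial cells = target_ratio : 1 (assumed interpretation)."""
    total_cyano = cyano_cells_per_ml * culture_ml
    needed_bacteria = total_cyano * target_ratio
    return needed_bacteria / bacteria_stock_per_ml

# Hypothetical worked case: 1:1 ratio for 100 mL of culture at
# 1.0e4 cells/mL, using a 2.73e7 cells/mL B905-1 stock.
v = inoculum_volume_ml(1.0, 1.0e4, 100.0, 2.73e7)
```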
The effect of the cell-free filtrate of strain B905–1 on axenic Microcystis 905A was tested by adding the filtrate (2%, v/v) to a 100 mL sterilized Erlenmeyer flask containing an initial axenic cell number of 1.0 × 10⁶ cells mL⁻¹. The cell-free filtrate was obtained by filtration through a 0.22 μm cellulose acetate membrane. The negative control received the same amount of TY liquid medium, added to the 100 mL cyanobacterial culture or the BG11 agar plate.
All experiments were performed under aseptic conditions; the controls (CK) and treatments were replicated three times, and arithmetic means (± SD) are reported as the final results.
DNA extraction, sequencing and phylogenetic analysis
The isolated bacterial strains were identified based on 16S rRNA gene sequence analysis. Heterotrophs were prepared by incubating the seed culture at 37 °C with shaking at 180 rpm for 20 h in sterilized LB liquid medium. The heterotroph cells were collected by centrifugation at 4000 rpm for 10 min at 4 °C. DNA was extracted from the bacterial samples using the 3S DNA Isolation Kit V2.2 (Biocolor BioScience & Technology Co., Shanghai, China). Fragments of the 16S rDNA gene were amplified by PCR using the primers 27F (5′-GAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-ACGGCTACCTTGTTACGACTT-3′), and the amplified fragments were sequenced by AuGCT Biotech Co., Ltd. (Beijing, China) [17]. The BLAST procedure was used to search for sequence similarity in GenBank [28].
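A BLAST search returns percent-identity scores such as the 99% and 97% values reported for strains B905–1 and B905–2. As a minimal stand-in for that calculation, percent identity over a pair of already-aligned sequences can be computed directly; the toy fragments below are hypothetical and are not real 16S rDNA data.

```python
def percent_identity(seq_a, seq_b):
    """Percent identity between two aligned, equal-length sequences
    (a simplified stand-in for a BLAST identity score; no gap handling)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy aligned fragments differing at one of ten positions:
ident = percent_identity("GAGTTTGATC", "GAGTTTGTTC")  # 90.0
```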
Bacterial cell density was determined by the colony counting method: samples were cultured on TY agar medium at 28 ± 1 °C for 48 h, and the colonies were counted. Cyanobacterial cell numbers were determined with a hemocytometer under light microscopy (NIKON-YS100). The cell density or cell number of each sample was counted in triplicate, and the standard error of the mean was calculated for all data. Statistical analysis was performed using SPSS for Windows Version 17.0 (SPSS, Chicago, IL, USA) [6].
The generation time (G) of the cyanobacterium is calculated according to eq. (1):
$$ G = \frac{t_2 - t_1}{3.322\,(\lg X_2 - \lg X_1)} $$
where X1 and X2 are the cyanobacterium cell number at time t1 and t2, respectively.
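Eq. (1) can be checked numerically (3.322 ≈ 1/lg 2, so G is the doubling time); the cell numbers and times in the example are hypothetical, chosen only to illustrate the formula.

```python
import math

def generation_time(t1, t2, x1, x2):
    """Generation time G = (t2 - t1) / [3.322 (lg X2 - lg X1)] (eq. 1),
    where lg is the base-10 logarithm."""
    return (t2 - t1) / (3.322 * (math.log10(x2) - math.log10(x1)))

# Hypothetical example: cell number rises from 1.0e6 to 4.0e6 cells/mL
# over 48 h, i.e. two doublings, so G should be about 24 h.
g = generation_time(0.0, 48.0, 1.0e6, 4.0e6)
```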
The inhibition efficiency is calculated according to eq. (2):
$$ \text{Inhibition efficiency} = \left(1 - \frac{C_t}{C_0}\right) \times 100\% $$
where C0 and Ct are the cyanobacterium cell number of the control and test group at time t, respectively [5, 6].
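Eq. (2) translates directly into code; the counts below are hypothetical. Note that a negative value indicates growth promotion rather than inhibition, which is relevant here since strain B905–1 promoted growth.

```python
def inhibition_efficiency(c0, ct):
    """Inhibition efficiency = (1 - Ct/C0) * 100% (eq. 2), where C0 and Ct
    are the control and test-group cell numbers at time t."""
    return (1.0 - ct / c0) * 100.0

# Hypothetical counts: control at 2.0e6 cells/mL, treated at 0.5e6 cells/mL.
eff = inhibition_efficiency(2.0e6, 0.5e6)  # 75.0 (% inhibition)

# A treated culture denser than the control gives a negative value,
# i.e. net promotion of growth.
promo = inhibition_efficiency(1.0e6, 1.5e6)  # -50.0
```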
The data are presented within the manuscript, and the cyanobacteria M. aeruginosa FACHB-905 and FACHB-907 used in this study can be purchased from the Freshwater Algae Culture Collection of the Institute of Hydrobiology (FACHB), Chinese Academy of Sciences (Wuhan, China).
HCBs:
Harmful cyanobacterial blooms
Microcystis 905A:
The axenic M. aeruginosa FACHB-905A
Microcystis 905:
The xenic M. aeruginosa FACHB-905
FACHB:
Freshwater Algae Culture Collection of Institute of Hydrobiology
DGGE:
Denaturing gradient gel electrophoresis
Bruckner CG, Kroth PG. Protocols for the removal of bacteria from freshwater benthic diatom cultures. J Phycol. 2009;45:981–6.
Bruckner CG, Rehm C, Grossart HP, Kroth PG. Growth and release of extracellular organic compounds by benthic diatoms depend on interactions with bacteria. Environ Microbiol. 2011;13:1052–63.
Amin SA, Parker MS, Armbrust EV. Interactions between diatoms and Bacteria. Microbiol Mol Biol Rev. 2012;76:667–84.
Gardes A, Iversen MH, Grossart HP, Passow U, Ullrich MS. Diatom-associated bacteria are required for aggregation of Thalassiosira weissflogii. ISME J. 2011;5:436–45.
Kong Y, Zou P, Yang Q, Xu XY, Miao LH, Zhu L. Physiological responses of Microcystis aeruginosa under the stress of antialgal actinomycetes. J Hazard Mater. 2013;262:274–80.
Kong Y, Xu XY, Zhu L. Cyanobactericidal effect of Streptomyces sp. HJC-D1 on Microcystis auruginosa. PLoS One. 2013;8:e57654.
Ahmad F, Ahmad I, Khan MS. Screening of free-living rhizospheric bacteria for their multiple plant growth promoting activities. Microbiol Res. 2008;163:173–81.
Ferrier M, Martin JL, Rooney-Varga JN. Stimulation of Alexandrium fundyense growth by bacterial assemblages from the bay of Fundy. J Appl Microbiol. 2002;92:706–16.
Chen W, Song LR, Peng L, Wan N, Zhang XM, Gan NQ. Reduction in microcystin concentrations in large and shallow lakes: water and sediment-interface contributions. Water Res. 2008;42:763–73.
Gan NQ, Xiao Y, Zhu L, Wu ZX, Liu J, Hu CL, et al. The role of microcystins in maintaining colonies of bloom-forming Microcystis spp. Environ Microbiol. 2012;14:730–42.
Alex A, Vasconcelos V, Tamagnini P, Santos A, Antunes A. Unusual symbiotic cyanobacteria association in the genetically diverse intertidal marine sponge Hymeniacidon perlevis (Demospongiae, Halichondrida). PLoS One. 2012;7:e51834.
Brunberg AK. Contribution of bacteria in the mucilage of Microcystis spp. (cyanobacteria) to benthic and pelagic bacterial production in a hypereutrophic lake. FEMS Microbiol Ecol. 1999;29:13–22.
Erwin PM, Olson JB, Thacker RW. Phylogenetic diversity, host-specificity and community profiling of sponge-associated bacteria in the northern Gulf of Mexico. PLoS One. 2011;6:e26806.
Grossart HP, Czub G, Simon M. Algae-bacteria interactions and their effects on aggregation and organic matter flux in the sea. Environ Microbiol. 2006;8:1074–84.
Grossart HP, Kiorboe T, Tang KW, Allgaier M, Yam EM, Ploug H. Interactions between marine snow and heterotrophic bacteria: aggregate formation and microbial dynamics. Aquat Microb Ecol. 2006;42:19–26.
Paerl HW, Bebout BM, Prufert LE. Bacterial association with marine Oscillatoria sp. (Trichodesmium sp.) populations: ecophysiological implications. J Phycol. 1989;25:773–84.
Shi LM, Cai YF, Yang HL, Xing P, Li PF, Kong LD, et al. Phylogenetic diversity and specificity of bacteria associated with Microcystis aeruginosa and other cyanobacteria. J Environ Sci China. 2009;21:1581–90.
Worm J, Sondergaard M. Dynamics of heterotrophic bacteria attached to Microcystis spp. (cyanobacteria). Aquat Microb Ecol. 1998;14:19–28.
Ferris MJ, Hirsch CF. Method for isolation and purification of cyanobacteria. Appl Environ Microbiol. 1991;57:1448–52.
Han AW, Oh KH, Jheong WH, Cho YC. Establishment of an axenic culture of microcystin-producing Microcystis aeruginosa isolated from a Korean reservoir. J Microbiol Biotechnol. 2010;20:1152–5.
Katoh H, Furukawa J, Tomita-Yokotani K, Nishi Y. Isolation and purification of an axenic diazotrophic drought-tolerant cyanobacterium, Nostoc commune, from natural cyanobacterial crusts and its utilization for field research on soils polluted with radioisotopes. BBA Bioenerg. 2012;1817:1499–505.
Kim JS, Park YH, Yoon BD, Oh HM. Establishment of axenic cultures of Anabaena flos-aquae and Aphanothece nidulans (cyanobacteria) by lysozyme treatment. J Phycol. 1999;35:865–9.
Sena L, Rojas D, Montiel E, Gonzalez H, Moret J, Naranjo L. A strategy to obtain axenic cultures of Arthrospira spp. cyanobacteria. World J Microbiol Biotechnol. 2011;27:1045–53.
Shirai M, Matumaru K, Ohotake A, Takamura Y, Aida T, Nakano M. Development of a solid medium for growth and isolation of axenic microcystin strain (cyanobacteria). Appl Environ Microbiol. 1989;55:2569–71.
Choi GG, Bae MS, Ahn CY, Oh HM. Induction of axenic culture of Arthrospira (Spirulina) platensis based on antibiotic sensitivity of contaminating bacteria. Biotechnol Lett. 2008;30:87–92.
Shirai M, Ohtake A, Sano T, Matsumoto S, Sakamoto T, Sato A, et al. Toxicity and toxins of natural blooms and isolated strains of Microcystis spp. (cyanobacteria) and improved procedure for purification of cultures. Appl Environ Microbiol. 1991;57:1241–5.
Casamatta DA, Wickstrom CE. Sensitivity of two disjunct bacterioplankton communities to exudates from the cyanobacterium Microcystis aeruginosa Kutzing. Microb Ecol. 2000;40:64–73.
Xu XT, Dimitrov D, Rahbek C, Wang ZH. NCBI miner: sequences harvest from Genbank. Ecography. 2015;38:426–30.
Peng ZY, Li L, Yang LQ, Zhang B, Chen G, Bi YP. Overexpression of peanut diacylglycerol acyltransferase 2 in Escherichia coli. PLoS One. 2013;8(4):e61363.
Kim HW, Vannela R, Zhou C, Rittmann BE. Nutrient acquisition and limitation for the photoautotrophic growth of Synechocystis sp. PCC6803 as a renewable biomass source. Biotechnol Bioeng. 2011;108(2):277–85.
Abed RMM, Kohls K, Leloup J, de Beer D. Abundance and diversity of aerobic heterotrophic microorganisms and their interaction with cyanobacteria in the oxic layer of an intertidal hypersaline cyanobacterial mat. FEMS Microbiol Ecol. 2018;94(2):1–12.
Cummings SL, Barbé D, Leao TF, Korobeynikov A, Engene N, Glukhov E, et al. A novel uncultured heterotrophic bacterial associate of the cyanobacterium Moorea producens JHB. BMC Microbiol. 2016;16:198.
Bolch CJS, Blackburn SI. Isolation and purification of Australian isolates of the toxic cyanobacterium Microcystis aeruginosa Kutz. J Appl Phycol. 1996;8:5–13.
Kim M, Shin B, Lee J, Park HY, Park W. Culture-independent and culture-dependent analyses of the bacterial community in the phycosphere of cyanobloom-forming Microcystis aeruginosa. Sci Rep. 2019;9(1):20416.
Shiraishi H. Association of heterotrophic bacteria with aggregated Arthrospira platensis exopolysaccharides: implications in the induction of axenic cultures. Biosci Biotechnol Biochem. 2015;79(2):331–41.
Jasti S, Sieracki ME, Poulton NJ, Giewat MW, Rooney-Varga JN. Phylogenetic diversity and specificity of bacteria closely associated with Alexandrium spp. and other phytoplankton. Appl Environ Microbiol. 2005;71:3483–94.
Castenholz RW. Culturing methods for cyanobacteria. Method Enzymol. 1998;167:68–93.
Croft MT, Lawrence AD, Raux-Deery E, Warren MJ, Smith AG. Algae acquire vitamin B-12 through a symbiotic relationship with bacteria. Nature. 2005;438:90–3.
Tsigarida E, Boziaris IS, Nychas GJE. Bacterial synergism or antagonism in a gel cassette system. Appl Environ Microbiol. 2003;69:7204–9.
Tanaka T, Kawasaki K, Daimon S, Kitagawa W, Yamamoto K, Tamaki H, et al. A hidden pitfall in the preparation of agar media undermines microorganism cultivability. Appl Environ Microbiol. 2014;80(24):7659–66.
Zhou Y, Eustance E, Straka L, Lai YJS, Xia SQ, Rittmann BE. Quantification of heterotrophic bacteria during the growth of Synechocystis sp. PCC 6803 using fluorescence activated cell sorting and microscopy. Algal Res. 2018;30:94–100.
Berg KA, Lyra C, Sivonen K, Paulin L, Suomalainen S, Tuomi P, et al. High diversity of cultivable heterotrophic bacteria in association with cyanobacterial water blooms. ISME J. 2009;3(3):314–25.
Kong Y, Xu XY, Zhu L, Miao LH. Control of the harmful alga Microcystis aeruginosa and absorption of nitrogen and phosphorus by Candida utilis. Appl Biochem Biotechnol. 2013;169:88–99.
Julkowska D, Obuchowski M, Holland IB, Seror SJ. Comparative analysis of the development of swarming communities of Bacillus subtilis 168 and a natural wild type: critical effects of surfactin and the composition of the medium. J Bacteriol. 2005;187:65–76.
Taniuchi Y, Chen YLL, Chen HY, Tsai ML, Ohki K. Isolation and characterization of the unicellular diazotrophic cyanobacterium group C TW3 from the tropical western Pacific Ocean. Environ Microbiol. 2012;14:641–54.
Sezonov G, Joseleau-Petit D, D'Ari R. Escherichia coli physiology in Luria-Bertani broth. J Bacteriol. 2007;189:8746–9.
We would like to express deep thanks to the Editors and the anonymous reviewers for their helpful comments on the manuscript.
This study was financially supported by a grant from the National High Technology Research and Development Program of China (863 Program) (No. 2013AA102805–04), the Key Laboratory of Water Pollution Control and Environmental Safety of Zhejiang Province (No. 2018ZJSHKF06), the Key Project of Jingzhou Science and Technology (No. 2019EC61–15), the China Postdoctoral Science Foundation (No. 2016M591832), the Natural Science Foundation of Jiangsu Province (No. BK20150165) and the Science and Technology Program of the Administration of Quality and Technology Supervision of Jiangsu Province (No. KJ15ZB01).
Suqin Gao and Yun Kong contributed equally to this work.
School of Biology and Pharmaceutical Engineering, Wuhan Polytechnic University, Wuhan, 430023, Hubei, China
Suqin Gao, Jing Yu, Lihong Miao & Chi Zeng
College of Resources and Environment, Yangtze University, Wuhan, 430100, Hubei, China
Yun Kong & Lipeng Ji
Key Laboratory of Water Pollution Control and Environmental Safety of Zhejiang Province, Hangzhou, 310058, Zhejiang, China
Yun Kong
Yixing Academy of Environmental Protection, Nanjing University, Yixing, 214200, Jiangsu, China
Yixing Urban Supervision & Inspection Administration of Product Quality, National Supervision & Inspection Center of Environmental Protection Equipment Quality (Jiangsu), Yixing, 214205, Jiangsu, China
Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan, 430072, Hubei, China
Lirong Song
Suqin Gao
Jing Yu
Lihong Miao
Lipeng Ji
Chi Zeng
YK and LM conceived and designed the project. SG, JY, LJ and CZ performed the experiments. YK, LM and LS analyzed the data. LM, CZ and LS contributed reagents/materials/analysis tools. YK, SG and LM wrote the paper. All authors have read and approved the manuscript.
Correspondence to Lihong Miao.
This manuscript does not involve any human participants, human data, human tissue, individual person's data or animal experiments.
Effects of antibiotics on heterotrophs and cyanobacterium. Figure S1. Effects of lysozyme on heterotrophs and Microcystis 905.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Gao, S., Kong, Y., Yu, J. et al. Isolation of axenic cyanobacterium and the promoting effect of associated bacterium on axenic cyanobacterium. BMC Biotechnol 20, 61 (2020). https://doi.org/10.1186/s12896-020-00656-5
Microcystis aeruginosa
Bacterial symbioses
Heterotrophic bacteria
Promoting effect
Microbial biotechnology
Time-Resolved Planar Particle Image Velocimetry of the 3-D Multi-Mode Richtmyer Meshkov Instability
Sewell, Everest George
PIV
Richtmyer-Meshkov
Shock Tube
Jacobs, Jeffrey W.
An experimental investigation of the Richtmyer-Meshkov instability (RMI) is carried out using a single driver vertical shock tube. A diffuse, stably stratified membrane-less interface is formed between air and sulfur hexafluoride (SF$_6$) gases (Atwood number, $A = \frac{\rho_1 - \rho_2}{\rho_1+\rho_2} \approx 0.67$) via counterflow, where the light gas (air) enters the tube from the top of the driven section, and the heavy gas (SF$_6$) enters from the bottom of the test section. A perturbation is imposed at the interface using voice coil drivers that cause a vertical oscillation of the column of gases. This oscillation results in the Rayleigh-Taylor unstable growth of random modes present at the interface, and gives rise to Faraday waves which invert with half the frequency of the oscillation. The interface is initially accelerated by a Mach 1.17 (in air) shock wave, and the development of the ensuing mixing layer is investigated. The shock wave is then reflected from the bottom of the apparatus, where it interacts with the mixing layer a second time (reshock). The experiment is initialized with two distinct perturbations: high amplitude experiments, where the shock wave arrives at the maximum excursion of the perturbation, and low amplitude experiments, where it arrives near its minimum. Time resolved Particle Image Velocimetry (PIV) is used as the primary flow diagnostic, yielding instantaneous velocity field estimates at a rate of 2 kHz. Measurements of the growth exponent $\theta$, where the mixing layer width $h$ is assumed to grow following $h(t) \approx t^\theta$, yield a value of $\theta \approx 0.51$ for high amplitude experiments and $\theta \approx 0.45$ for low amplitude experiments following the incident shock wave, when estimated using the width of the mixing layer approximated by the width of the turbulent kinetic energy containing region.
Following interaction with the reflected shock wave, $\theta \approx 0.33$ for high amplitude experiments, and $\theta \approx 0.50$ for low amplitude experiments. It is observed that the low amplitude experiments grow faster than the high amplitude experiments following reshock, likely owing to the presence of steeper density gradients present in the relatively less developed mixing layer. $\theta$ is also estimated using the decay of turbulent kinetic energy for experiments where dissipation is significant. Theta estimates using both methods are found to be in good agreement for the high amplitude case following the incident shock, with $\theta\approx0.51$. $\theta \approx 0.46$ is found following reshock, which is larger than the value found when fitting $\theta$ to width data. Low amplitude experiments do not exhibit significant dissipation, and a value of $\theta \approx 0.68$ is found for low amplitude experiments following the incident shock, and $\theta \approx 0.62$ following reshock. Persistent anisotropy is a commonly observed phenomenon in the RMI mixing layer, owing to the stronger velocity perturbation components in the streamwise direction following the passage of a shock wave. High amplitude experiments are observed to reach a constant anisotropy ratio (defined as the ratio of streamwise to spanwise turbulent kinetic energy, or TKX/TKY), an indication of self-similarity, shortly following the passage of the incident shock wave with value of $\approx 1.8$. Low amplitude experiments do not reach a constant value during the experimental observation window, suggesting that the flow is still evolving even after a second shock interaction. Examination of the spanwise average anisotropy tensor reveals asymmetry in the anisotropy for low amplitude experiments, with the heavy gas exhibiting a slightly larger degree of anisotropy. 
The high amplitude experiments exhibit transitional outer Reynolds numbers ($Re \equiv \frac{h\dot{h}}{\nu} > 10^4$) using the criterion proposed by Dimotakis shortly following the passage of the initial shock wave, while the low amplitude experiments largely remain below this threshold. Following reshock, both sets of experiments are elevated to $Re \approx 10^5$, which is a strong indication that mixing transition should occur and an inertial range will form. However, the extended length scale analysis proposed by Zhou, which accounts for the temporal evolution of scales that are a prerequisite for the formation of an inertial range, indicates that neither the high nor the low amplitude experiments have entered a transitional regime even following reshock. Furthermore, the $\theta \approx 0.5$ growth of the outer length scale in these experiments suggests that transition will not occur even if longer observation windows were possible. The lack of an inertial range is evident in spectral analysis of the mixing region.
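The growth exponent fits quoted above amount to a least-squares slope of $\log h$ versus $\log t$. The sketch below reproduces that fit on synthetic width data generated with $\theta = 0.51$ (the high-amplitude value); the data are illustrative, not the experimental measurements.

```python
import math

def fit_theta(times, widths):
    """Least-squares slope of log(h) versus log(t),
    i.e. the exponent theta in h(t) ~ t**theta."""
    xs = [math.log(t) for t in times]
    ys = [math.log(h) for h in widths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic mixing-layer widths generated with theta = 0.51:
ts = [1.0, 2.0, 4.0, 8.0, 16.0]
hs = [t ** 0.51 for t in ts]
theta = fit_theta(ts, hs)  # recovers ~0.51
```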
Cherenkov Telescope Array Contributions to the 35th International Cosmic Ray Conference (ICRC2017) (1709.03483)
F. Acero, B.S. Acharya, V. Acín Portella, C. Adams, I. Agudo, F. Aharonian, I. Al Samarai, A. Alberdi, M. Alcubierre, R. Alfaro, J. Alfaro, C. Alispach, R. Aloisio, R. Alves Batista, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E.O. Angüner, E. Antolini, L.A. Antonelli, V. Antonuccio, P. Antoranz, C. Aramo, M. Araya, C. Arcaro, T. Armstrong, F. Arqueros, L. Arrabito, M. Arrieta, K. Asano, A. Asano, M. Ashley, P. Aubert, C. B. Singh, A. Babic, M. Backes, S. Bajtlik, C. Balazs, M. Balbo, O. Ballester, J. Ballet, L. Ballo, A. Balzer, A. Bamba, R. Bandiera, P. Barai, C. Barbier, M. Barcelo, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, C. Bauer, U. Becciani, Y. Becherini, J. Becker Tjus, W. Bednarek, A. Belfiore, W. Benbow, M. Benito, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, S. Bernhard, K. Bernlöhr, C. Bertinelli Salucci, B. Bertucci, M.-A. Besel, V. Beshley, J. Bettane, N. Bhatt, W. Bhattacharyya, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, A. Bilinsky, R. Bird, E. Bissaldi, J. Biteau, M. Bitossi, O. Blanch, P. Blasi, J. Blazek, C. Boccato, C. Bockermann, C. Boehm, M. Bohacova, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, M. Böttcher, C. Boutonnet, F. Bouyjou, L. Bowman, V. Bozhilov, C. Braiding, S. Brau-Nogué, J. Bregeon, M. Briggs, A. Brill, W. Brisken, D. Bristow, R. Britto, E. Brocato, A.M. Brown, S. Brown, K. Brügge, P. Brun, P. Brun, F. Brun, L. Brunetti, G. Brunetti, P. Bruno, M. Bryan, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, A. Caccianiga, R. Cameron, F. Canelli, R. Canestrari, M. Capalbi, M. Capasso, F. Capitanio, A. Caproni, R. Capuzzo-Dolcetta, P. Caraveo, V. Cárdenas, J. Cardenzana, M. Cardillo, C. Carlile, S. Caroff, R. Carosi, A. Carosi, E. Carquín, J. Carr, J.-M. Casandjian, S. Casanova, E. Cascone, A.J. 
Castro-Tirado, J. Castroviejo Mora, F. Catalani, O. Catalano, D. Cauz, C. Celestino Silva, S. Celli, M. Cerruti, E. Chabanne, P. Chadwick, N. Chakraborty, C. Champion, A. Chatterjee, S. Chaty, R. Chaves, A. Chen, X. Chen, K. Cheng, M. Chernyakova, M. Chikawa, V.R. Chitnis, A. Christov, J. Chudoba, M. Cieślar, P. Clark, V. Coco, S. Colafrancesco, P. Colin, E. Colombo, J. Colome, S. Colonges, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, R. Cornat, J. Cortina, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. Covone, P. Cristofari, S.J. Criswell, R. Crocker, J. Croston, C. Crovari, J. Cuadra, O. Cuevas, X. Cui, P. Cumani, G. Cusumano, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, P. Da Vela, Ø. Dale, V.T. Dang, L. Dangeon, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, V. De Caprio, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, I. de la Calle, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, J.R.T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, J. Decock, C. Deil, P. Deiml, M. Del Santo, E. Delagnes, G. Deleglise, M. Delfino Reznicek, C. Delgado, J. Delgado Mengual, R. Della Ceca, D. della Volpe, M. Detournay, J. Devin, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, L. Diaz, C. Díaz, C. Dib, H. Dickinson, S. Diebold, S. Digel, A. Djannati-Ataï, M. Doert, A. Domínguez, D. Dominis Prester, I. Donnarumma, D. Dorner, M. Doro, J.-L. Dournaux, T. Downes, G. Drake, S. Drappeau, H. Drass, D. Dravins, L. Drury, G. Dubus, K. Dundas Morå, A. Durkalec, V. Dwarkadas, J. Ebr, C. Eckner, E. Edy, K. Egberts, S. Einecke, J. Eisch, F. Eisenkolb, T.R.N. Ekoume, C. Eleftheriadis, D. Elsässer, D. Emmanoulopoulos, J.-P. Ernenwein, P. Escarate, S. Eschbach, C. Espinoza, P. Evans, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, V. Fallah Ramazani, K. Farakos, E. Farrell, G. Fasola, Y. Favre, E. Fede, R. Fedora, E. 
Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, O. Ferreira, M. Fesquet, E. Fiandrini, A. Fiasson, M. Filipovic, D. Fink, J.P. Finley, C. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, H. Flores, L. Foffano, C. Föhr, M.V. Fonseca, L. Font, G. Fontaine, M. Fornasa, P. Fortin, L. Fortson, N. Fouque, B. Fraga, F.J. Franco, L. Freixas Coromina, C. Fruck, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, Y. Fukui, S. Funk, A. Furniss, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, D. Galloway, S. Gallozzi, B. Garcia, A. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, F. Gargano, C. Gargano, S. Garozzo, M. Garrido-Ruiz, D. Gascon, T. Gasparetto, F. Gaté, M. Gaug, B. Gebhardt, M. Gebyehu, N. Geffroy, B. Genolini, A. Ghalumyan, A. Ghedina, G. Ghirlanda, P. Giammaria, F. Gianotti, B. Giebels, N. Giglietto, V. Gika, R. Gimenes, P. Giommi, F. Giordano, G. Giovannini, E. Giro, M. Giroletti, J. Gironnet, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, J.L. Gómez, G. Gómez-Vargas, M.M. González, J.M. González, K.S. Gothe, D. Gotz, J. Goullon, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, G. Grasseau, R. Gredig, A.J. Green, T. Greenshaw, I. Grenier, S. Griffiths, A. Grillo, M.-H. Grondin, J. Grube, V. Guarino, B. Guest, O. Gueta, S. Gunji, G. Gyuk, D. Hadasch, L. Hagge, J. Hahn, A. Hahn, H. Hakobyan, S. Hara, M.J. Hardcastle, T. Hassan, T. Haubold, A. Haupt, K. Hayashi, M. Hayashida, H. He, M. Heller, J.C. Helo, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, N. Hiroshima, K. Hirotani, B. Hnatyk, J.K. Hoang, D. Hoffmann, W. Hofmann, J. Holder, D. Horan, J. Hörandel, M. Hörbe, D. Horns, P. Horvath, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, G. Hughes, D. Hui, G. Hull, T.B. Humensky, M. Hussein, M. Hütten, M. Iarlori, Y. Ikeno, J.M. Illa, D. Impiombato, T. Inada, A. Ingallinera, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. 
Iocco, K. Ioka, M. Ionica, M. Iori, A. Iriarte, K. Ishio, G.L. Israel, Y. Iwamura, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, F. Jankowsky, D. Jankowsky, P. Jansweijer, C. Jarnot, P. Jean, C.A. Johnson, M. Josselin, I. Jung-Richardt, J. Jurysek, P. Kaaret, P. Kachru, M. Kagaya, J. Kakuwa, O. Kalekin, R. Kankanyan, A. Karastergiou, M. Karczewski, S. Karkar, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, N. Kawanaka, L. Kaye, D. Kazanas, N. Kelley-Hoskins, B. Khélifi, D.B. Kieda, T. Kihm, S. Kimeswenger, S. Kimura, S. Kisaka, S. Kishida, R. Kissmann, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, A. Kong, Y. Konno, K. Kosack, G. Kowal, S. Koyama, M. Kraus, M. Krause, F. Krauß, F. Krennrich, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, S. Kumar, H. Kuroda, J. Kushida, P. Kushwaha, N. La Palombara, V. La Parola, G. La Rosa, R. Lahmann, K. Lalik, G. Lamanna, M. Landoni, D. Landriu, H. Landt, R.G. Lang, J. Lapington, P. Laporte, O. Le Blanc, T. Le Flour, P. Le Sidaner, S. Leach, A. Leckngam, S.-H. Lee, W.H. Lee, J.-P. Lees, J. Lefaucheur, M.A. Leigui de Oliveira, M. Lemoine-Goumard, J.-P. Lenain, G. Leto, R. Lico, M. Limon, R. Lindemann, E. Lindfors, L. Linhoff, A. Lipniacka, S. Lloyd, T. Lohse, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, T. Louge, F. Louis, M. Louys, F. Lucarelli, D. Lucchesi, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, T. Maccarone, E. Mach, G.M. Madejski, G. Maier, A. Majczyna, P. Majumdar, M. Makariev, G. Malaguti, A. Malouf, S. Maltezos, D. Malyshev, D. Malyshev, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, P. Manigot, K. Mannheim, N. Maragos, D. Marano, A. Marcowith, J. Marín, M. Mariotti, M. Marisaldi, S. Markoff, J. Martí, J.-M. Martin, P. Martin, L. Martin, M. Martínez, G. Martínez, O. Martínez, R. Marx, N. Masetti, P. Massimino, A. Mastichiadis, M. Mastropietro, S. Masuda, H. Matsumoto, N. Matthews, S. Mattiazzo, G. Maurin, N. Maxted, M. Mayer, D. 
Mazin, M.N. Mazziotta, L. Mc Comb, I. McHardy, C. Medina, A. Melandri, C. Melioli, D. Melkumyan, S. Mereghetti, J.-L. Meunier, T. Meures, M. Meyer, S. Micanovic, T. Michael, J. Michałowski, I. Mievre, J. Miller, I.A. Minaya, T. Mineo, F. Mirabel, J.M. Miranda, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, C. Molijn, E. Molinari, R. Moncada, T. Montaruli, I. Monteiro, D. Mooney, P. Moore, A. Moralejo, D. Morcuende-Parrilla, E. Moretti, K. Mori, G. Morlino, P. Morris, A. Morselli, F. Moscato, D. Motohashi, E. Moulin, S. Mueller, R. Mukherjee, P. Munar, C. Mundell, J. Mundet, T. Murach, H. Muraishi, K. Murase, A. Murphy, A. Nagai, N. Nagar, S. Nagataki, T. Nagayoshi, B.K. Nagesh, T. Naito, D. Nakajima, T. Nakamori, Y. Nakamura, K. Nakayama, D. Naumann, P. Nayman, D. Neise, L. Nellen, R. Nemmen, A. Neronov, N. Neyroud, T. Nguyen, T.T. Nguyen, T. Nguyen Trung, L. Nicastro, J. Nicolau-Kukliński, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K.-I. Nishikawa, G. Nishiyama, K. Noda, L. Nogues, S. Nolan, D. Nosek, M. Nöthe, B. Novosyadlyj, S. Nozaki, F. Nunio, P. O'Brien, L. Oakes, C. Ocampo, J.P. Ochoa, R. Oger, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, J.-F. Olive, R.A. Ong, M. Orienti, R. Orito, A. Orlati, J.P. Osborne, M. Ostrowski, N. Otte, Z. Ou, E. Ovcharov, I. Oya, A. Ozieblo, M. Padovani, S. Paiano, A. Paizis, J. Palacio, M. Palatiello, M. Palatka, J. Pallotta, J.-L. Panazol, D. Paneque, M. Panter, R. Paoletti, M. Paolillo, A. Papitto, A. Paravac, J.M. Paredes, G. Pareschi, R.D. Parsons, P. Paśko, S. Pavy, A. Pe'er, M. Pech, G. Pedaletti, P. Peñil Del Campo, A. Perez, M.A. Pérez-Torres, L. Perri, M. Perri, M. Persic, A. Petrashyk, S. Petrera, P.-O. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, Q. Piel, D. Pieloth, F. Pintore, C. Pio García, A. Pisarski, S. Pita, L. Pizarro, Ł. Platos, M. Pohl, V. Poireau, A. Pollo, J. Porthault, J. Poutanen, D. Pozo, E. Prandini, P. Prasit, J. Prast, K. 
Pressard, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pruteanu, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, S. Pürckhauer, F. Queiroz, J. Quinn, A. Quirrenbach, I. Rafighi, S. Rainò, P.J. Rajda, R. Rando, R.C. Rannot, S. Razzaque, I. Reichardt, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, T. Reposeur, B. Reville, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, M.G. Richer, T. Richtler, J. Rico, F. Rieger, M. Riquelme, P.R. Ristori, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, M. Roncadelli, J. Rosado, S. Rosen, S. Rosier Lees, J. Rousselle, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, J.E. Ruíz del Mazo, W. Rujopakarn, C. Rulten, F. Russo, O. Saavedra, S. Sabatini, B. Sacco, I. Sadeh, E. Sæther Hatlen, S. Safi-Harb, V. Sahakian, S. Sailer, T. Saito, N. Sakaki, S. Sakurai, D. Salek, F. Salesa Greus, G. Salina, D. Sanchez, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E.M. Santos, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, Y. Sato, F.G. Saturni, R. Savalle, M. Sawada, S. Schanne, E.J. Schioppa, S. Schlenstedt, T. Schmidt, J. Schmoll, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, E. Sciacca, S. Scuderi, M. Seglar-Arroyo, A. Segreto, I. Seitenzahl, D. Semikoz, O. Sergijenko, N. Serre, M. Servillat, K. Seweryn, K. Shah, A. Shalchi, M. Sharma, R.C. Shellard, I. Shilon, L. Sidoli, M. Sidz, H. Siejkowski, J. Silk, A. Sillanpää, D. Simone, B.B. Singh, G. Sironi, J. Sitarek, P. Sizun, V. Sliusar, A. Slowikowska, A. Smith, D. Sobczyńska, A. Sokolenko, H. Sol, G. Sottile, W. Springer, O. Stahl, A. Stamerra, S. Stanič, R. Starling, D. Staszak, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, M. Stephan, R. Sternberger, M. Sterzel, B. Stevenson, M. Stodulska, M. Stodulski, T. Stolarczyk, G. Stratta, U. Straumann, R. 
Stuik, M. Suchenek, T. Suomijarvi, A.D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, K. Takahashi, H. Takahashi, M. Takahashi, L. Takalo, S. Takami, J. Takata, J. Takeda, T. Tam, M. Tanaka, T. Tanaka, Y. Tanaka, S. Tanaka, C. Tanci, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, F. Temme, P. Temnikov, Y. Terada, J.C. Terrazas, R. Terrier, D. Terront, T. Terzic, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, A. Tiengo, D. Tiziani, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, J. Tomastik, A. Tonachini, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, N. Trakarnsirinont, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, M. Tsirou, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, M. Uslenghi, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, A.M. Van den Berg, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G.S. Varner, G. Vasileiadis, V. Vassiliev, J.R. Vázquez, M. Vázquez Acosta, M. Vecchi, A. Vega, P. Veitch, P. Venault, C. Venter, S. Vercellone, P. Veres, S. Vergani, V. Verzi, G.P. Vettolani, C. Veyssiere, A. Viana, J. Vicha, C. Vigorito, J. Villanueva, P. Vincent, J. Vink, F. Visconti, V. Vittorini, H. Voelk, V. Voisin, A. Vollhardt, S. Vorobiov, I. Vovk, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, P. Wagner, S.P. Wakely, T. Walstra, R. Walter, M. Ward, J.E. Ward, D. Warren, J.J. Watson, N. Webb, P. Wegner, O. Weiner, A. Weinstein, C. Weniger, F. Werner, H. Wetteskind, M. White, R. White, A. Wierzcholska, S. Wiesand, R. Wijers, P. Wilcox, A. Wilhelm, M. Wilkinson, M. Will, D.A. Williams, M. Winter, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, T. Wu, K.K. Yadav, C. Yaguna, T. Yamamoto, H. Yamamoto, N. Yamane, R. Yamazaki, S. Yanagita, L. Yang, D. Yelos, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, D. 
Zaborov, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanin, R. Zanmar Sanchez, D. Zaric, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Ziemann, K. Ziętara, A. Zink, J. Ziółkowski, V. Zitelli, A. Zoli, J. Zorn
Oct. 3, 2017 astro-ph.HE
List of contributions from the Cherenkov Telescope Array Consortium presented at the 35th International Cosmic Ray Conference, July 12-20, 2017, Busan, Korea.
Constraining Lorentz invariance violation using the Crab Pulsar emission observed up to TeV energies by MAGIC (1709.00346)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, W. Bednarek, W. Bhattacharyya, G. Bonnoli, S. M. Colak, S. Covino, A. De Angelis, M. Doert, M. Doro, M. Engelkemeier, D. Fidalgo, R. J. García López, M. Gaug, D. Hadasch, J. Herrera, H. Kubo, E. Lindfors, P. Majumdar, K. Mannheim, D. Mazin, A. Moralejo, M. Nievas Rosillo, K. Noda, R. Paoletti, L. Perri, I. Puljak, M. Ribó, S. Schroeder, D. Sobczynska, L. Takalo, M. Teshima, A. Treves, M. Will ETH Zurich, CH-8093 Zurich, Switzerland, Japanese MAGIC Consortium: ICRR, The University of Tokyo, 277-8582 Chiba, Department of Physics, Kyoto University, 606-8502 Kyoto, Tokai University, 259-1292 Kanagawa, The University of Tokushima, 770-8502 Tokushima, Japan, Università di Padova, INFN, I-35131 Padova, Italy, Croatian MAGIC Consortium: University of Rijeka, 51000 Rijeka, University of Split - FESB, 21000 Split, University of Zagreb - FER, 10000 Zagreb, University of Osijek, 31000 Osijek, Rudjer Boskovic Institute, 10000 Zagreb, Croatia, Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Salt Lake, Sector-1, Kolkata 700064, India, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Universidad de La Laguna, Dpto. 
Astrofísica, E-38206 La Laguna, Tenerife, Spain, University of Lódź, Department of Astrophysics, PL-90236 Lódź, Poland, , D-15738 Zeuthen, Germany, Humboldt University of Berlin, Institut für Physik, D-12489 Berlin Germany, University of Trieste, INFN Trieste, I-34127 Trieste, Italy, , The Barcelona Institute of Science, Technology, Campus UAB, E-08193 Bellaterra Università di Siena, INFN Pisa, I-53100 Siena, Italy, INAF - National Institute for Astrophysics, I-00136 Rome, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Universität Würzburg, D-97074 Würzburg, Germany, Finnish MAGIC Consortium: Tuorla Observatory, Finnish Centre of Astronomy with ESO, University of Turku, Vaisalantie 20, FI-21500 Piikkiö, Astronomy Division, University of Oulu, FIN-90014 University of Oulu, Finland, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Inst. for Nucl. Research, Nucl. Energy, Bulgarian Academy of Sciences, BG-1784 Sofia, Bulgaria, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, , E-08193 Barcelona, Spain)
Sept. 1, 2017 astro-ph.HE
Spontaneous breaking of Lorentz symmetry at energies on the order of the Planck energy or lower is predicted by many quantum gravity theories, implying non-trivial dispersion relations for the photon in vacuum. Consequently, gamma-rays of different energies, emitted simultaneously from astrophysical sources, could accumulate measurable differences in their time of flight until they reach the Earth. Such tests have been carried out in the past using fast variations of gamma-ray flux from pulsars, and more recently from active galactic nuclei and gamma-ray bursts. We present new constraints from a study of the gamma-ray emission of the galactic Crab Pulsar, recently observed up to TeV energies by the MAGIC collaboration. A profile likelihood analysis of pulsar events reconstructed for energies above 400 GeV finds no significant variation in arrival time as their energy increases. Ninety-five percent CL limits are obtained on the effective Lorentz invariance violating energy scale at the level of $E_{\mathrm{QG}_1} > 5.5\cdot 10^{17}$ GeV ($4.5\cdot 10^{17}$ GeV) for a linear, and $E_{\mathrm{QG}_2} > 5.9\cdot 10^{10}$ GeV ($5.3\cdot 10^{10}$ GeV) for a quadratic scenario, for the subluminal and the superluminal cases, respectively. A substantial part of this study is dedicated to the calibration of the test statistic with respect to bias and coverage properties. Moreover, the limits take into account systematic uncertainties, which are found to worsen the statistical limits by about 36--42\%. Our constraints would have been considerably more competitive if the intrinsic pulse shape of the pulsar between 200 GeV and 400 GeV were understood in sufficient detail to allow the inclusion of events well below 400 GeV.
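The size of the effect being constrained can be illustrated with a back-of-the-envelope sketch: for the linear case, the expected arrival delay of a photon of energy $E$ relative to a low-energy one is roughly $\Delta t \approx (E/E_{\mathrm{QG}_1})\,D/c$. The snippet below evaluates this for a TeV photon from the Crab pulsar, taking the quoted subluminal limit and a commonly cited Crab distance of about 2 kpc as assumptions; it is an illustration only, not the profile-likelihood analysis used in the paper.

```python
# Illustrative sketch (not the MAGIC analysis): first-order time-of-flight
# delay from a linear Lorentz-invariance-violating dispersion relation,
#   Delta t ~ (E / E_QG) * (D / c)   (subluminal, linear case).
# E_QG1 = 5.5e17 GeV is the quoted 95% CL limit; the ~2 kpc Crab distance
# is a commonly cited value, assumed here for the estimate.

C = 2.998e8            # speed of light, m/s
KPC = 3.086e19         # one kiloparsec in metres

def liv_delay_linear(E_GeV, E_QG_GeV, distance_kpc):
    """Arrival delay (s) of a photon of energy E relative to a low-energy one."""
    return (E_GeV / E_QG_GeV) * (distance_kpc * KPC / C)

# A 1 TeV photon from the Crab pulsar with E_QG1 at the quoted limit:
dt = liv_delay_linear(1000.0, 5.5e17, 2.0)   # roughly a few 1e-4 s
```

At the quoted limit the delay is a few tenths of a millisecond, a small but non-negligible fraction of the Crab's 33 ms rotation period, which is why pulsar timing is sensitive to such scales.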
Search for very-high-energy gamma-ray emission from the microquasar Cygnus X-1 with the MAGIC telescopes (1708.03689)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, J. Becerra González, W. Bhattacharyya, S. Bonnefoy, A. Chatterjee, S. Covino, A. De Angelis, M. Doert, M. Doro, M. Engelkemeier, D. Fidalgo, R. J. García López, P. Giammaria, D. Hadasch, J. Hose, J. Kushida, S. Lombardi, P. Majumdar, L. Maraschi, U. Menzel, E. Moretti, M. Nievas Rosillo, L. Nogués, R. Paoletti, M. Peresano, E. Prandini, W. Rhode, K. Satalecka, I. Šnidarić, L. Takalo, D. Tescaro, A. Treves, M. Will, S. A. Trushkin ETH Zurich, CH-8093 Zurich, Switzerland, INAF - National Institute for Astrophysics, viale del Parco Mellini, 84, I-00136 Rome, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split - FESB, University of Zagreb - FER, University of Osijek, Croatia, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Salt Lake, Sector-1, Kolkata 700064, India, Max-Planck-Institut für Physik, D-80805 München, Germany, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Universidad de La Laguna, Dpto. Astrofísica, E-38206 La Laguna, Tenerife, Spain, Deutsches Elektronen-Synchrotron Institut de Fisica d'Altes Energies, The Barcelona Institute of Science, Technology, Campus UAB, 08193 Bellaterra Università di Siena, INFN Pisa, I-53100 Siena, Italy, Institute for Space Sciences Technische Universität Dortmund, D-44221 Dortmund, Germany, Universität Würzburg, D-97074 Würzburg, Germany, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Astronomy Division, University of Oulu, Finland, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, ICRR, The University of Tokyo, Department of Physics, Hakubi Center, Kyoto University, Tokai University, The University of Tokushima, Japan, Inst. for Nucl. 
Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, Institute for Space Sciences also at the Department of Physics of Kyoto University, Japan, now at Centro Brasileiro de Pesquisas Físicas, R. Dr. Xavier Sigaud, 150 - Urca, Rio de Janeiro - RJ, 22290-180, Brazil, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, Humboldt University of Berlin, Institut für Physik Newtonstr. 15, 12489 Berlin Germany, also at University of Trieste, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, Dept. of Physics & Astronomy, University of Bologna, Departament d'Astronomia i Metereologia, Institut de Ciènces del Cosmos, Universtitat de Barcelona, Barcelona, Spain, Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE, UK, Special astrophysical Observatory RAS, Nizhnij Arkhys, Karachaevo-Cherkassia, Russia, Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany)
Aug. 11, 2017 astro-ph.HE
The microquasar Cygnus X-1 displays the two typical soft and hard X-ray states of a black-hole transient. During the latter, Cygnus X-1 shows a one-sided relativistic radio jet. The recent detection of the system in the high-energy (HE; $E\gtrsim60$ MeV) gamma-ray range with \textit{Fermi}-LAT associates this emission with the outflow. Previous MAGIC observations revealed a hint of flaring activity in the very-high-energy (VHE; $E\gtrsim100$ GeV) regime during this X-ray state. We analyze $\sim97$ hr of Cygnus X-1 data taken with the MAGIC telescopes between July 2007 and October 2014. To shed light on the previously suggested correlation between hard X-rays and VHE gamma rays, we study each main X-ray state separately. We perform an orbital phase-folded analysis to look for variability in the VHE band. Additionally, to place this variability behavior in a multiwavelength context, we compare our results with \textit{Fermi}-LAT, \textit{AGILE}, \textit{Swift}-BAT, \textit{MAXI}, \textit{RXTE}-ASM, AMI and RATAN-600 data. We do not detect Cygnus X-1 in the VHE regime. We establish upper limits for each X-ray state, assuming a power-law distribution with photon index $\Gamma=3.2$. For steady emission in the hard and soft X-ray states, we set integral upper limits at the 95\% confidence level for energies above 200 GeV at $2.6\times10^{-12}$~photons cm$^{-2}$ s$^{-1}$ and $1.0\times10^{-11}$~photons cm$^{-2}$ s$^{-1}$, respectively. At the level of the MAGIC sensitivity, we rule out steady VHE gamma-ray emission above this energy range originating in the interaction between the relativistic jet and the surrounding medium, while emission above this flux level produced inside the binary still remains a valid possibility.
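The orbital phase-folded analysis mentioned above can be sketched in a few lines: event times are mapped onto orbital phase modulo the binary period, then binned so that a flux or upper limit can be derived per phase bin. The 5.6-day orbital period of Cygnus X-1 is a published value; the reference epoch and event times below are arbitrary placeholders.

```python
import numpy as np

# Minimal sketch of orbital phase folding: each event time is mapped to an
# orbital phase in [0, 1) given a reference epoch t0 and orbital period P.
# P = 5.6 d is the published Cygnus X-1 orbital period; t0 and the event
# times are made-up placeholders for illustration.

def orbital_phase(t_mjd, t0_mjd, period_days):
    """Fold event times (MJD) into orbital phase in [0, 1)."""
    return np.mod((np.asarray(t_mjd) - t0_mjd) / period_days, 1.0)

# Group events into phase bins so a flux (or upper limit) can be
# computed per bin:
times = np.array([0.0, 1.4, 2.8, 4.2])
phases = orbital_phase(times, t0_mjd=0.0, period_days=5.6)
counts, edges = np.histogram(phases, bins=4, range=(0.0, 1.0))
```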
MAGIC observations of the microquasar V404 Cygni during the 2015 outburst (1707.00887)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, C. Arcaro, A. Babić, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, R. Carosi, A. Carosi, A. Chatterjee, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Cumani, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, F. Di Pierro, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, M. Gaug, P. Giammaria, N. Godinović, D. Gora, S. Griffiths, D. Guberman, D. Hadasch, A. Hahn, T. Hassan, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, K. Ishio, Y. Konno, H. Kubo, J. Kushida, D. Kuveždić, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, C. Maggio, P. Majumdar, M. Makariev, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, M. Minev, R. Mirzoyan, A. Moralejo, V. Moreno, E. Moretti, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, D. Ninci, K. Nishijima, K. Noda, L. Nogués, S. Paiano, J. Palacio, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, P. G. Prada Moroni, E. Prandini, I. Puljak, J. R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, T. Saito, K. Satalecka, S. Schroeder, T. Schweizer, A. Sillanpää, J. Sitarek, I. Šnidarić, D. Sobczynska, A. Stamerra, M. Strzys, T. Surić, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, D. F. Torres, N. Torres-Albà, A. Treves, G. Vanzo, M. Vazquez Acosta, I. Vovk, J. E. Ward, M. Will, D. Zarić, A. Loh, J. Rodriguez
July 4, 2017 astro-ph.HE
The microquasar V404 Cygni underwent a series of outbursts in the second half of June 2015, during which its flux in hard X-rays (20-40 keV) reached about 40 times the Crab Nebula flux. Because of the exceptional interest of the flaring activity from this source, observations at several wavelengths were conducted. The MAGIC telescopes, triggered by the INTEGRAL alerts, followed up the flaring source for several nights during the period June 18-27, for more than 10 hours in total. One hour of observation was conducted simultaneously with a giant 22 GHz radio flare and a hint of signal at GeV energies seen by Fermi-LAT. The MAGIC observations did not show significant emission in any of the analysed time intervals. The derived flux upper limit, in the energy range 200--1250 GeV, is 4.8$\times 10^{-12}$ ph cm$^{-2}$ s$^{-1}$. We estimate the gamma-ray opacity during the flaring period, which, along with our non-detection, points to inefficient particle acceleration in the V404 Cyg jets if the VHE emitter is located farther than $1\times 10^{10}$ cm from the compact object.
Observation of the Black Widow B1957+20 millisecond pulsar binary system with the MAGIC telescopes (1706.01378)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, J. Becerra González, B. Biasuzzi, G. Bonnoli, A. Chatterjee, J. Cortina, A. De Angelis, M. Doert, M. Doro, M. Engelkemeier, D. Fidalgo, R. J. García López, P. Giammaria, D. Guberman, M. Hayashida, K. Ishio, D. Lelas, M. López, K. Mannheim, D. Mazin, E. Moretti, K. Nilsson, S. Paiano, J. M. Paredes, L. Perri, E. Prandini, M. Ribó, S. Schroeder, I. Šnidarić, T. Surić, T. Terzić, N. Torres-Albà, J. E. Ward, L. Guillemot ETH Zurich, Institute for Particle Physics, Zurich, Switzerland, Università di Udine, INFN, sezione di Trieste, Italy, Udine, Italy, INAF - National Institute for Astrophysics, Roma, Italy, Dipartimento di Fisica ed Astronomia, Università di Padova, INFN sez. di Padova, Padova, Italy, Croatian MAGIC Consortium: Rudjer Boskovic Institute, University of Rijeka, University of Split - FESB, University of Zagreb-FER, University of Osijek, Split, Croatia, Saha Institute of Nuclear Physics, HBNI, Kolkata, India, Max-Planck-Institut für Physik, München, Germany, Grupo de Altas Energias, Universidad Complutense, Madrid, Madrid, Spain, Instituto de Astrofisica de Canarias, La Laguna Division of Astrophysics, University of Lodz, Lodz, Poland, Zeuthen, Zeuthen, Germany, , The Barcelona Institute of Science, Technology, Bellaterra Dipartimento di Fisica, Università di Siena, INFN sez. 
di Pisa, Siena, Italy, Institut für Theoretische Physik und Astrophysik - Fakultät für Physik und Astronomie - Universität Würzburg, Würzburg, Germany, Technische Universität Dortmund, Dortmund, Germany, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Astronomy Division, University of Oulu, Finland, Piikkiö, Finland, Universitat Autònoma de Barcelona, Barcelona, Spain, Universitat de Barcelona, Barcelona, Spain, Institute for Nuclear Research, Nuclear Energy, Sofia, Bulgaria, Universita di Pisa, INFN Pisa, Pisa, Italy, ICREA, Institut de Ciencies de l'Espai also at the Department of Physics of Kyoto University, Japan, now at Centro Brasileiro de Pesquisas Físicas, R. Dr. Xavier Sigaud, 150 - Urca, Rio de Janeiro - RJ, 22290-180, Brazil, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, Humboldt University of Berlin, Institut für Physik Newtonstr. 15, 12489 Berlin Germany, now at Ecole polytechnique fédérale de Lausanne also at Japanese MAGIC Consortium, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, Dept. of Physics, Astronomy, University of Bologna, Laboratoire de Physique et Chimie de l'Environnement et de l'Espace, LPC2E, CNRS-Universite d'Orleans, F-45071 Orleans, France, Station de Radioastronomie de Nancay, Observatoire de Paris, CNRS/INSU, F-18330 Nancay, France)
June 5, 2017 astro-ph.HE
B1957+20 is a millisecond pulsar located in a black-widow-type compact binary system with a low-mass stellar companion. The interaction of the pulsar wind with the companion star's wind and/or the interstellar plasma is expected to create plausible conditions for the acceleration of electrons to TeV energies and the subsequent production of very-high-energy gamma rays through the inverse Compton process. We performed extensive observations of B1957+20 with the MAGIC telescopes. We interpret the results in the framework of several models, namely emission from the vicinity of the millisecond pulsar, from the region where the pulsar and stellar companion winds interact, or from a bow-shock nebula. No significant steady very-high-energy gamma-ray emission was found. We derived a 95% confidence level upper limit of $3.0\times10^{-12}$ cm$^{-2}$ s$^{-1}$ on the average gamma-ray emission from the binary system above 200 GeV. The upper limits obtained with MAGIC constrain, for the first time, different models of the high-energy emission in B1957+20. In particular, in the inner mixed-wind nebula model with mono-energetic injection of electrons, the acceleration efficiency of electrons is constrained to be below ~(2-10)% of the pulsar spin-down power. For the pulsar emission, the obtained upper limits for each emission peak are well above the exponential cut-off fits to the Fermi-LAT data extrapolated to energies above 50 GeV. The MAGIC upper limits can rule out a simple power-law tail extension through the sub-TeV energy range for the main peak seen at radio frequencies.
Very-high-energy gamma-ray observations of the Type Ia Supernova SN 2014J with the MAGIC telescopes (1702.07677)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, J. Becerra González, A. Berti, G. Bonnoli, A. Chatterjee, J. Cortina, A. De Angelis, M. Doert, M. Doro, M. Engelkemeier, D. Fidalgo, D. Galindo, D. Garrido Terrats, D. Gora, J. Herrera, K. Kodani, A. La Barbera, M. López, K. Mallot, L. Maraschi, D. Mazin, E. Moretti, M. Nievas Rosillo, L. Nogués, D. Paneque, G. Pedaletti, J. Poutanen, J. R. Garcia, T. Saito, A. Sillanpää, A. Stamerra, F. Tavecchio, D. F. Torres, M. Vazquez Acosta, R. Zanin Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Università di Padova, INFN, I-35131 Padova, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, University of Zagreb, Croatia, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Salt Lake, Sector-1, Kolkata 700064, India, Max-Planck-Institut für Physik, D-80805 München, Germany, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Universidad de La Laguna, Dpto. Astrofísica, E-38206 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, , D-15738 Zeuthen, Germany, , The Barcelona Institute of Science, Technology, Campus UAB, 08193 Bellaterra Universität Würzburg, D-97074 Würzburg, Germany, , E-08193 Barcelona, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Astronomy Division, University of Oulu, Finland, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, ICRR, The University of Tokyo, Department of Physics, Hakubi Center, Kyoto University, Tokai University, The University of Tokushima, Japan, Inst. for Nucl. 
Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, ICREA, Institute for Space Sciences, E-08193 Barcelona, Spain, now at Centro Brasileiro de Pesquisas Físicas, R. Dr. Xavier Sigaud, 150 - Urca, Rio de Janeiro - RJ, 22290-180, Brazil, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, Humboldt University of Berlin, Institut für Physik Newtonstr. 15, 12489 Berlin Germany, now at Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland, now at Max-Planck-Institut fur Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, Dept. of Physics & Astronomy, University of Bologna, also at ISDC - Science Data Center for Astrophysics, 1290, Versoix
Feb. 24, 2017 astro-ph.HE
In this work we present data from observations with the MAGIC telescopes of SN 2014J, detected on January 21, 2014, the closest Type Ia supernova since Imaging Atmospheric Cherenkov Telescopes started to operate. We probe the possibility of very-high-energy (VHE; $E\geq100$ GeV) gamma rays produced in the early stages of Type Ia supernova explosions. We performed follow-up observations after this supernova explosion for 5 days, between January 27 and February 2, 2014. We search for a gamma-ray signal in the energy range between 100 GeV and several TeV from the location of SN 2014J using data from a total of $\sim5.5$ hours of observations. Prospects for observing gamma rays of hadronic origin from SN 2014J in the near future are also addressed. No significant excess was detected from the direction of SN 2014J. Upper limits at the 95$\%$ confidence level on the integral flux, assuming a power-law spectrum, d$F/$d$E\propto E^{-\Gamma}$, with a spectral index of $\Gamma=2.6$, for energies higher than 300 GeV and 700 GeV, are established at $1.3\times10^{-12}$ and $4.1\times10^{-13}$ photons~cm$^{-2}$ s$^{-1}$, respectively. For the first time, upper limits on the VHE emission of a Type Ia supernova are established. The energy fraction isotropically emitted into TeV gamma rays during the first $\sim10$ days after the supernova explosion, for energies greater than 300 GeV, is limited to $10^{-6}$ of the total available energy budget ($\sim 10^{51}$ erg). Within the assumed theoretical scenario, the MAGIC upper limits on the VHE emission suggest that SN 2014J will not be detectable in the future by any current or planned generation of Imaging Atmospheric Cherenkov Telescopes.
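The integral limits quoted above follow from the assumed power-law shape: for $\mathrm{d}F/\mathrm{d}E = k\,E^{-\Gamma}$ with $\Gamma>1$, the photon flux above a threshold $E_0$ is $F(>E_0)=k\,E_0^{1-\Gamma}/(\Gamma-1)$. A minimal sketch, with a made-up normalization $k$ (the MAGIC limits fix the integral flux itself, so the absolute numbers here are illustrative only):

```python
# Illustrative: integral photon flux above a threshold for a power-law
# spectrum dF/dE = k * E**(-gamma) (photons cm^-2 s^-1 GeV^-1), gamma > 1:
#   F(>E0) = k * E0**(1 - gamma) / (gamma - 1)
# gamma = 2.6 is the index assumed in the abstract; k below is a
# made-up placeholder, not a MAGIC measurement.

def integral_flux(k, gamma, e_threshold_gev):
    if gamma <= 1:
        raise ValueError("integral diverges for gamma <= 1")
    return k * e_threshold_gev ** (1.0 - gamma) / (gamma - 1.0)

f300 = integral_flux(k=1e-9, gamma=2.6, e_threshold_gev=300.0)
f700 = integral_flux(k=1e-9, gamma=2.6, e_threshold_gev=700.0)
```

Note that the ratio of integral fluxes above two thresholds depends only on the index: $(E_1/E_2)^{1-\Gamma}$, independent of the normalization.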
Observations of Sagittarius A* during the pericenter passage of the G2 object with MAGIC (1611.07095)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, C. Arcaro, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, F. Di Pierro, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinović, A. González Muñoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogués, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, J. R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, T. Saito, K. Satalecka, S. Schroeder, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Surić, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. 
Wu, R. Zanin
Nov. 21, 2016 astro-ph.HE
Context. We present the results of a multi-year monitoring campaign of the Galactic Center (GC) with the MAGIC telescopes. These observations were primarily motivated by reports that a putative gas cloud (G2) would pass in close proximity to the supermassive black hole (SMBH) associated with Sagittarius A*, located at the center of our Galaxy. This event was expected to give astronomers a unique chance to study the effect of in-falling matter on the broad-band emission of a SMBH. Aims. We search for potential flaring emission of very-high-energy (VHE; $\geq$100 GeV) gamma rays from the direction of the SMBH at the GC due to the passage of the G2 object. Using these data we also study the morphology of this complex region. Methods. We observed the GC region with the MAGIC Imaging Atmospheric Cherenkov Telescopes during the period 2012-2015, collecting 67 hours of good-quality data. In addition to a search for variability in the flux and spectral shape of the GC gamma-ray source, we use a point-source subtraction technique to remove the known gamma-ray emitters located around the GC in order to reveal the TeV morphology of the extended emission inside that region. Results. No effect of the G2 object on the VHE gamma-ray emission from the GC was detected during the four-year observation campaign. We confirm previous measurements of the VHE spectrum of Sagittarius A*, and do not detect any significant variability of the emission from the source. Furthermore, the known VHE gamma-ray emitter at the location of the supernova remnant G0.9+0.1 was detected, as well as the recently discovered VHE source close to the GC radio Arc.
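The point-source subtraction idea described in the Methods can be illustrated with a toy two-dimensional example: known emitters are modeled as PSF-shaped (here Gaussian) templates and removed from the count map, leaving the extended component. This is a deliberately simplified sketch; the actual MAGIC analysis involves the full instrument response and background modeling.

```python
import numpy as np

# Toy sketch of point-source subtraction: model a known point source as a
# PSF-shaped (Gaussian) template with unit total counts, scale it by its
# fitted amplitude, and subtract it from the count map to reveal the
# underlying extended emission. All numbers are made up for illustration.

def gaussian_template(shape, x0, y0, sigma):
    y, x = np.indices(shape)
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()  # normalized to unit total counts

shape = (50, 50)
diffuse = np.full(shape, 0.2)                        # flat extended emission
src = 500.0 * gaussian_template(shape, 25, 25, 2.0)  # point source, 500 counts
skymap = diffuse + src

# Subtracting the fitted point-source model leaves the extended component:
residual = skymap - 500.0 * gaussian_template(shape, 25, 25, 2.0)
```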
A search for spectral hysteresis and energy-dependent time lags from X-ray and TeV gamma-ray observations of Mrk 421 (1611.04626)
A. U. Abeysekara, S. Archambault, A. Archer, W. Benbow, R. Bird, M. Buchovecky, J. H. Buckley, V. Bugaev, J. V Cardenzana, M. Cerruti, X. Chen, L. Ciupik, M. P. Connolly, W. Cui, J. D. Eisch, A. Falcone, Q. Feng, J. P. Finley, H. Fleischhack, A. Flinders, L. Fortson, A. Furniss, S. Griffin, M. Hütten, N. Håkansson, D. Hanna, O. Hervet, J. Holder, T. B. Humensky, P. Kaaret, P. Kar, M. Kertzman, D. Kieda, M. Krause, S. Kumar, M. J. Lang, G. Maier, S. McArthur, A. McCann, K. Meagher, P. Moriarty, R. Mukherjee, D. Nieto, S. O'Brien, R. A. Ong, A. N. Otte, N. Park, V. Pelassa, M. Pohl, A. Popkow, E. Pueschel, K. Ragan, P. T. Reynolds, G. T. Richards, E. Roache, I. Sadeh, M. Santander, G. H. Sembroski, K. Shahinyan, D. Staszak, I. Telezhinsky, J. V. Tucci, J. Tyler, S. P. Wakely, A. Weinstein, A. Wilhelm, D. A. Williams, M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, C. Arcaro, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, R. Carosi, A. Carosi, A. Chatterjee, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Cumani, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, F. Di Pierro, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, M. Gaug, P. Giammaria, N. Godinović, D. Gora, D. Guberman, D. Hadasch, A. Hahn, T. Hassan, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, R. Mirzoyan, A. 
Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogués, M. Nöthe, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, J. R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, T. Saito, K. Satalecka, S. Schroeder, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, M. Strzys, T. Surić, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, D. F. Torres, N. Torres-Albà, T. Toyama, A. Treves, G. Vanzo, M. Vazquez Acosta, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin, T. Hovatta, I. de la Calle Perez, P. S. Smith, E. Racero, M. Baloković
Blazars are variable emitters across all wavelengths over a wide range of timescales, from months down to minutes. It is therefore essential to observe blazars simultaneously at different wavelengths, especially in the X-ray and gamma-ray bands, where the broadband spectral energy distributions usually peak. In this work, we report on three "target-of-opportunity" (ToO) observations of Mrk 421, one of the brightest TeV blazars, triggered by a strong flaring event at TeV energies in 2014. These observations feature long, continuous, and simultaneous exposures with XMM-Newton (covering the X-ray and optical/ultraviolet bands) and VERITAS (covering the TeV gamma-ray band), along with contemporaneous observations from other gamma-ray facilities (MAGIC and Fermi-LAT) and a number of radio and optical facilities. Although neither rapid flares nor a significant X-ray/TeV correlation is detected, these observations reveal subtle changes in the X-ray spectrum of the source over the course of a few days. We search the simultaneous X-ray and TeV data for spectral hysteresis patterns and time delays, which could provide insight into the emission mechanisms and the source properties (e.g. the radius of the emitting region, the strength of the magnetic field, and related timescales). The observed broadband spectra are consistent with a one-zone synchrotron self-Compton model. We find that the power spectral density at $\gtrsim 4\times 10^{-4}$ Hz from the X-ray data can be described by a power-law model with an index between 1.2 and 1.8, and do not find evidence for a steepening of the power spectral index (often associated with a characteristic length scale) compared to the previously reported values at lower frequencies.
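The power-spectral-density measurement above can be illustrated with a toy periodogram fit. This is not the collaboration's analysis code: the light curve below is synthetic, and the binning, injected index, and fitting choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evenly sampled X-ray light curve: 256 bins of 1 ks each.
n, dt = 256, 1000.0
freqs = np.fft.rfftfreq(n, d=dt)

# Build a toy red-noise light curve with a power-law PSD, P(f) ~ f^-1.5,
# by shaping white noise in the Fourier domain (|FT| ~ f^(-alpha/2)).
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-1.5 / 2.0)
ft = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
flux = np.fft.irfft(ft, n=n)

# Periodogram estimate of the PSD.
psd = np.abs(np.fft.rfft(flux - flux.mean())) ** 2

# Fit log10 P = -alpha * log10 f + c, skipping the DC and Nyquist bins.
k = np.arange(1, freqs.size - 1)
slope, _ = np.polyfit(np.log10(freqs[k]), np.log10(psd[k]), 1)
alpha = -slope
print(f"fitted PSD index: {alpha:.2f}")
```

A real analysis would average periodogram estimates and account for measurement noise and sampling windows, but the log-log power-law fit is the core of the index determination.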
Very High-Energy Gamma-Ray Follow-Up Program Using Neutrino Triggers from IceCube (1610.01814)
IceCube Collaboration: M.G. Aartsen, K. Abraham, M. Ackermann, J.Adams, J.A. Aguilar, M. Ahlers, M.Ahrens, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G.Anton, M. Archinger, C. Arguelles, J.Auffenberg, S. Axani, X. Bai, S.W. Barwick, V. Baum, R. Bay, J.J. Beatty, J.Becker-Tjus, K.-H.Becker, S. BenZvi, D. Berley, E. Bernardini, A.Bernhard, D.Z. Besson, G. Binder, D. Bindig, M.Bissok, E. Blaufuss, S. Blot, C. Bohm, M. Borner, F. Bos, D. Bose, S. Boser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, S. Bron, A. Burgman, T. Carver, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G.H. Collin, J.M. Conrad, D.F. Cowen R. Cross, M. Day, J.P.A.M. de Andre, C.De Clercq, E.del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K.D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J.C. Diaz-Velez, V. di Lorenzo, H.Dujmovic, J.P. Dumm, M. Dunkman, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, S. Euler, P.A. Evenson, S. Fahey, A.R. Fazely, J. Feintzeig, J. Felde, K. Filimonov, C.Finley, S. Flis, C.-C. Fosig, A. Franckowiak, R. Franke, E. Friedman, T. Fuchs, T.K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, L. Gladstone, T. Glauch, T. Glusenkamp, A. Goldschmidt, G. Golup, J.G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, E. Hansen, T. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G.C. Hill, K.D. Hoffman, R. Hoffmann, K. Holzapfel, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G.S. Japaridze, M. Jeong, K. Jero, B.J.P. Jones, M. Jurkovic, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J.L. Kelley, A. Kheirandish, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S.R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, L. Kopke, C. Kopper, S. Kopper, D.J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G. Kruckl, C. Kruger, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, M. Labare, J.L. 
Lanfranchi, M.J. Larson, F. Lauber, D. Lennarz, M. Lesiak-Bzdak, M. Leuermann, L. Lu, J. Lunemann, J. Madsen, G. Maggi, K.B.M. Mahn, S. Mancina, M. Mandelartz, R. Maruyama, K. Mase, R. Maunu, F. McNally, K. Meagher, M. Medici, M. Meier, A. Meli, T. Menne, G. Merino, T. Meures, S. Miarecki, L. Mohrmann, T. Montaruli, M. Moulai, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S.C. Nowicki, D.R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D.V. Pankova, P. Peiffer, O. Penek, J.A. Pepper, C. Perez de los Heros, D. Pieloth, E. Pinat, P.B. Price, G.T. Przybylski, M. Quinnan, C. Raab, L. Radel, M. Rameez, K. Rawlins, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D.Ryckbosch, D. Rysewyk, L.Sabbatini, S.E. Sanchez-Herrera, A. Sandrock, J. Sandroos, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, S. Schoenen, S. Schoneberg, L. Schumacher, D. Seckel, S. Seunarine, D. Soldin, M. Song, G.M. Spiczak, C. Spiering, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R.G. Stokstad, A. Stossl, R. Strom, N.L. Strotjohann, G.W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tevsic, S. Tilav, P.A. Toale, M.N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, M. van Rossem, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. Weaver, M.J. Weiss, C. Wendt, S. Westerhoff, B.J. Whelan, S. Wickmann, K. Wiebe, C.H. Wiebusch, L. Wille, D.R. Williams, L. Wills, M. Wolf, T.R. Wood, E. Woolsey, K. Woschnagg, D.L. Xu, X.W. Xu, Y. Xu, J.P. Yanez, G. Yodh, S. Yoshida, M. Zoll MAGIC Collaboration: M.L. Ahnen, S. Ansoldi, L.A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U.Barres de Almeida, J.A. Barrio, J. Becerra Gonzalez, W. Bednarek, E. 
Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J.L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Ona Wilhelmi, F. Di Pierro, M. Doert, A. Dominguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernandez-Barral, D. Fidalgo, M.V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. Garcia Lopez, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinovic, A. Gonzalez Munoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martinez, D. Mazin, U. Menzel, J.M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogues, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J.M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P.G. Prada Moroni, E.Prandini, I. Puljak, I. Reichardt, W. Rhode, M. Ribo, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, S. Schroeder, C. Schultz, T. Schweizer, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Suric, L. Takalo, F. Tavecchio, P. Temnikov, T.Terzic, D. Tescaro, M. Teshima, J. Thaele, D.F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J.E. Ward, M. Will, M.H. Wu, R. Zanin VERITAS Collaboration: A.U. Abeysekara, S. Archambault, A. Archer, W. Benbow, R. Bird, E. Bourbeau, M. Buchovecky, V. Bugaev, K. Byrum, J.V Cardenzana, M. 
Cerruti, L. Ciupik, M.P. Connolly, W. Cui, H.J. Dickinson, J. Dumm, J.D. Eisch, M. Errando, A. Falcone, Q. Feng, J.P. Finley, H. Fleischhack, A. Flinders, L. Fortson, A. Furniss, G.H. Gillanders, S. Griffin, J. Grube, M. Hutten, N. Haakansson, O. Hervet, J. Holder, T.B. Humensky, C.A. Johnson, P. Kaaret, P. Kar, N. Kelley-Hoskins, M. Kertzman, D. Kieda, M. Krause, F. Krennrich, S. Kumar, M.J. Lang, G. Maier, S. McArthur, A. McCann, P. Moriarty, R. Mukherjee, T. Nguyen, D. Nieto, S. O'Brien, R.A. Ong, A.N. Otte, N. Park, M. Pohl, A. Popkow, E. Pueschel, J. Quinn, K. Ragan, P.T. Reynolds, G.T. Richards, E. Roache, C. Rulten, I. Sadeh, M. Santander, G.H. Sembroski, K. Shahinyan, D. Staszak, I. Telezhinsky, J.V. Tucci, J. Tyler, S.P. Wakely, A. Weinstein, P. Wilcox, A. Wilhelm, D.A. Williams, B. Zitzer
Nov. 12, 2016 hep-ex, physics.ins-det, astro-ph.IM, astro-ph.HE
We describe and report the status of a neutrino-triggered program in IceCube that generates real-time alerts for gamma-ray follow-up observations by atmospheric-Cherenkov telescopes (MAGIC and VERITAS). While IceCube is capable of monitoring the whole sky continuously, high-energy gamma-ray telescopes have restricted fields of view and in general are unlikely to be observing a potential neutrino-flaring source at the time such neutrinos are recorded. Neutrino-triggered alerts thus aim to increase the availability of simultaneous multi-messenger data during potential neutrino flaring activity, which can increase the discovery potential and constrain the phenomenological interpretation of the high-energy emission of selected source classes (e.g. blazars). We present the requirements for a fast and stable online analysis of potential neutrino signals, describe its operation, and report the first results of the program, which operated between 14 March 2012 and 31 December 2015.
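As a toy illustration of the kind of selection such an online analysis performs, one can ask how unlikely a multiplet of neutrinos from one direction is given the expected atmospheric background. The rates and counts below are hypothetical, not IceCube's actual alert criteria.

```python
import math

def poisson_p_value(n_obs, mu_bg):
    """P(N >= n_obs) for a Poisson-distributed background with mean mu_bg."""
    return 1.0 - sum(math.exp(-mu_bg) * mu_bg ** k / math.factorial(k)
                     for k in range(n_obs))

# Hypothetical numbers: 0.01 background doublets expected from a given
# source direction in the search window; 2 signal-like events observed.
p = poisson_p_value(2, 0.01)
print(f"chance probability: {p:.2e}")
```

A small chance probability like this is what would motivate issuing a real-time alert for follow-up observations.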
Contributions of the Cherenkov Telescope Array (CTA) to the 6th International Symposium on High-Energy Gamma-Ray Astronomy (Gamma 2016) (1610.05151)
The CTA Consortium: A. Abchiche, U. Abeysekara, Ó. Abril, F. Acero, B. S. Acharya, C. Adams, G. Agnetta, F. Aharonian, A. Akhperjanian, A. Albert, M. Alcubierre, J. Alfaro, R. Alfaro, A. J. Allafort, R. Aloisio, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E. O. Angüner, E. Antolini, L. A. Antonelli, M. Antonucci, V. Antonuccio, P. Antoranz, C. Aramo, A. Aravantinos, M. Araya, C. Arcaro, B. Arezki, A. Argan, T. Armstrong, F. Arqueros, L. Arrabito, M. Arrieta, K. Asano, M. Ashley, P. Aubert, C. B. Singh, A. Babic, M. Backes, A. Bais, S. Bajtlik, C. Balazs, M. Balbo, D. Balis, C. Balkowski, O. Ballester, J. Ballet, A. Balzer, A. Bamba, R. Bandiera, A. Barber, C. Barbier, M. Barcelo, M. Barkov, A. Barnacka, U. Barres de Almeida, J. A. Barrio, S. Basso, D. Bastieri, C. Bauer, U. Becciani, Y. Becherini, J. Becker Tjus, V. Beckmann, W. Bednarek, W. Benbow, D. Benedico Ventura, J. Berdugo, D. Berge, E. Bernardini, M. G. Bernardini, S. Bernhard, K. Bernlöhr, B. Bertucci, M.-A. Besel, V. Beshley, N. Bhatt, P. Bhattacharjee, W. Bhattacharyya, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, A. Bilinsky, W. Bilnik, B. Biondo, R. Bird, T. Bird, E. Bissaldi, M. Bitossi, O. Blanch, P. Blasi, J. Blazek, C. Bockermann, C. Boehm, L. Bogacz, M. Bogdan, M. Bohacova, C. Boisson, J. Boix, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, P. Bonifacio, F. Bonnarel, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, M. Böttcher, J.-J. Bousquet, C. Boutonnet, F. Bouyjou, L. Bowman, C. Braiding, T. Brantseg, S. Brau-Nogué, J. Bregeon, M. Briggs, M. Brigida, T. Bringmann, W. Brisken, D. Bristow, R. Britto, E. Brocato, S. Bron, P. Brook, W. Brooks, A. M. Brown, K. Brügge, F. Brun, P. Brun, P. Brun, G. Brunetti, L. Brunetti, P. Bruno, T. Buanes, N. Bucciantini, G. Buchholtz, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, F. Cadoux, J. Calvo Tovar, R. 
Cameron, F. Canelli, R. Canestrari, M. Capalbi, M. Capasso, G. Capobianco, A. Caproni, P. Caraveo, J. Cardenzana, M. Cardillo, S. Carius, C. Carlile, A. Carosi, R. Carosi, E. Carquín, J. Carr, M. Carroll, J. Carter, P.-H. Carton, J.-M. Casandjian, S. Casanova, S. Casanova, E. Cascone, M. Casiraghi, A. Castellina, J. Castroviejo Mora, F. Catalani, O. Catalano, S. Catalanotti, D. Cauz, S. Cavazzani, P. Cerchiara, E. Chabanne, P. Chadwick, T. Chaleil, C. Champion, A. Chatterjee, S. Chaty, R. Chaves, A. Chen, X. Chen, X. Chen, K. Cheng, M. Chernyakova, L. Chiappetti, M. Chikawa, D. Chinn, V. R. Chitnis, N. Cho, A. Christov, J. Chudoba, M. Cieślar, M. A. Ciocci, R. Clay, S. Colafrancesco, P. Colin, J.-M. Colley, E. Colombo, J. Colome, S. Colonges, V. Conforti, V. Connaughton, S. Connell, J. Conrad, J. L. Contreras, P. Coppi, S. Corbel, J. Coridian, R. Cornat, P. Corona, D. Corti, J. Cortina, L. Cossio, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. Covone, G. Crimi, S. J. Criswell, R. Crocker, J. Croston, J. Cuadra, P. Cumani, G. Cusumano, P. Da Vela, Ø. Dale, F. D'Ammando, D. Dang, V. T. Dang, L. Dangeon, M. Daniel, I. Davids, I. Davids, B. Dawson, F. Dazzi, B. de Aguiar Costa, A. De Angelis, R. F. de Araujo Cardoso, V. De Caprio, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, F. De Frondat, E. M. de Gouveia Dal Pino, I. de la Calle, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, A. De Luca, J. R. T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, G. Decock, J. Decock, C. Deil, M. Del Santo, E. Delagnes, G. Deleglise, C. Delgado, J. Delgado, D. della Volpe, P. Deloye, M. Detournay, A. Dettlaff, J. Devin, T. Di Girolamo, C. Di Giulio, A. Di Paola, F. Di Pierro, M. A. Diaz, C. Díaz, C. Dib, J. Dick, H. Dickinson, S. Diebold, S. Digel, J. Dipold, G. Disset, A. Distefano, A. Djannati-Ataï, M. Doert, M. Dohmke, A. Domínguez, N. Dominik, J.-L. Dominique, D. Dominis Prester, A. Donat, I. Donnarumma, D. Dorner, M. 
Doro, J.-L. Dournaux, T. Downes, K. Doyle, G. Drake, S. Drappeau, H. Drass, D. Dravins, L. Drury, G. Dubus, L. Ducci, D. Dumas, K. Dundas Morå, D. Durand, D. D'Urso, V. Dwarkadas, J. Dyks, M. Dyrda, J. Ebr, E. Edy, K. Egberts, P. Eger, A. Egorov, S. Einecke, J. Eisch, F. Eisenkolb, C. Eleftheriadis, D. Elsaesser, D. Elsässer, D. Emmanoulopoulos, C. Engelbrecht, D. Engelhaupt, J.-P. Ernenwein, P. Escarate, S. Eschbach, C. Espinoza, P. Evans, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, V. Fallah Ramazani, D. Fantinel, K. Farakos, C. Farnier, E. Farrell, G. Fasola, Y. Favre, E. Fede, R. Fedora, E. Fedorova, S. Fegan, D. Ferenc, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, O. Ferreira, M. Fesquet, P. Fetfatzis, E. Fiandrini, A. Fiasson, A. Filipčič, M. Filipovic, D. Fink, C. Finley, J. P. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, H. Fleischhack, H. Flores, D. Florin, C. Föhr, E. Fokitis, M. V. Fonseca, L. Font, G. Fontaine, B. Fontes, M. Fornasa, M. Fornasa, A. Förster, P. Fortin, L. Fortson, N. Fouque, A. Franckowiak, A. Franckowiak, F. J. Franco, I. Freire Mota Albuquerque, L. Freixas Coromina, L. Fresnillo, C. Fruck, M. Fuessling, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, T. Fukuda, Y. Fukui, S. Funk, A. Furniss, W. Gäbele, S. Gabici, A. Gadola, D. Galindo, D. D. Gall, Y. Gallant, D. Galloway, S. Gallozzi, J. A. Galvez, S. Gao, A. Garcia, B. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, C. Gargano, F. Gargano, S. Garozzo, F. Garrecht, L. Garrido, M. Garrido-Ruiz, D. Gascon, J. Gaskins, J. Gaudemard, M. Gaug, J. Gaweda, B. Gebhardt, M. Gebyehu, N. Geffroy, B. Genolini, L. Gerard, A. Ghalumyan, A. Ghedina, P. Ghislain, P. Giammaria, E. Giannakaki, F. Gianotti, S. Giarrusso, G. Giavitto, B. Giebels, T. Gieras, N. Giglietto, V. Gika, R. Gimenes, M. Giomi, P. Giommi, F. Giordano, G. Giovannini, P. Girardot, E. Giro, M. Giroletti, J. Gironnet, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, G. Gomez, M. M. 
Gonzalez, A. González, D. Gora, K. S. Gothe, D. Gotz, J. Goullon, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, G. Grasseau, R. Gredig, A. J. Green, A. M. Green, T. Greenshaw, I. Grenier, S. Griffiths, A. Grillo, M.-H. Grondin, J. Grube, M. Grudzinska, J. Grygorczuk, V. Guarino, D. Guberman, S. Gunji, G. Gyuk, D. Hadasch, A. Hagedorn, L. Hagge, J. Hahn, H. Hakobyan, S. Hara, M. J. Hardcastle, T. Hassan, K. Hatanaka, T. Haubold, A. Haupt, T. Hayakawa, M. Hayashida, M. Heller, R. Heller, J. C. Helo, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, W. Hirai, K. Hirotani, B. Hnatyk, J. Hoang, D. Hoffmann, W. Hofmann, T. Holch, J. Holder, S. Hooper, D. Horan, J. Hörandel, M. Hörbe, D. Horns, P. Horvath, J. Hose, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, M. Huetten, G. Hughes, D. Hui, T. B. Humensky, M. Hussein, M. Iacovacci, A. Ibarra, Y. Ikeno, J. M. Illa, D. Impiombato, T. Inada, S. Incorvaia, L. Infante, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Iori, K. Ishio, K. Ishio, G. L. Israel, Y. Iwamura, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, M. Janiak, D. Jankowsky, F. Jankowsky, P. Jean, I. Jegouzo, P. Jenke, J. J. Jimenez, M. Jingo, M. Jingo, L. Jocou, T. Jogler, C. A. Johnson, M. Jones, M. Josselin, L. Journet, I. Jung, P. Kaaret, M. Kagaya, J. Kakuwa, O. Kalekin, C. Kalkuhl, H. Kamon, R. Kankanyan, A. Karastergiou, K. Kärcher, M. Karczewski, S. Karkar, P. Karn, J. Kasperek, H. Katagiri, J. Kataoka, K. Katarzyński, S. Kato, U. Katz, N. Kawanaka, L. Kaye, D. Kazanas, N. Kelley-Hoskins, J. Kersten, B. Khélifi, D. B. Kieda, T. Kihm, S. Kimeswenger, S. Kisaka, S. Kishida, R. Kissmann, S. Klepser, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, B. Koch, F. Köck, J. Kocot, K. Kohri, K. Kokkotas, K. Kokkotas, D. Kolitzus, N. Komin, I. Kominis, A. Kong, Y. Konno, K. Kosack, G. Koss, M. Kossatz, G. Kowal, S. 
Koyama, J. Kozioł, M. Kraus, J. Krause, M. Krause, H. Krawzcynski, F. Krennrich, A. Kretzschmann, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, M. Kuklis, H. Kuroda, J. Kushida, A. La Barbera, N. La Palombara, V. La Parola, G. La Rosa, H. Laffon, R. Lahmann, M. Lakicevic, K. Lalik, G. Lamanna, D. Landriu, H. Landt, R. G. Lang, J. Lapington, P. Laporte, J.-P. Le Fèvre, T. Le Flour, P. Le Sidaner, S.-H. Lee, W. H. Lee, J.-P. Lees, J. Lefaucheur, K. Leffhalm, H. Leich, M. A. Leigui de Oliveira, D. Lelas, A. Lemière, M. Lemoine-Goumard, J.-P. Lenain, R. Leonard, R. Leoni, L. Lessio, G. Leto, A. Leveque, B. Lieunard, M. Limon, R. Lindemann, E. Lindfors, L. Linhoff, A. Liolios, A. Lipniacka, H. Lockart, T. Lohse, E. Łokas, S. Lombardi, F. Longo, A. Lopatin, M. Lopez, D. Loreggia, T. Louge, F. Louis, M. Louys, F. Lucarelli, D. Lucchesi, H. Lüdecke, T. Luigi, P. L. Luque-Escamilla, E. Lyard, M. C. Maccarone, T. Maccarone, T. J. Maccarone, E. Mach, G. M. Madejski, A. Madonna, F. Magniette, A. Magniez, M. Mahabir, G. Maier, P. Majumdar, P. Majumdar, M. Makariev, G. Malaguti, G. Malaspina, A. K. Mallot, A. Malouf, S. Maltezos, D. Malyshev, A. Mancilla, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, P. Manigot, N. Mankushiyil, K. Mannheim, N. Maragos, D. Marano, P. Marchegiani, J. A. Marcomini, A. Marcowith, M. Mariotti, M. Marisaldi, S. Markoff, C. Martens, J. Martí, J.-M. Martin, L. Martin, P. Martin, G. Martínez, M. Martínez, O. Martínez, K. Martynyuk-Lototskyy, R. Marx, N. Masetti, P. Massimino, A. Mastichiadis, S. Mastroianni, M. Mastropietro, S. Masuda, H. Matsumoto, S. Matsuoka, N. Matthews, S. Mattiazzo, G. Maurin, N. Maxted, N. Maxted, J. Maya, M. Mayer, D. Mazin, M. N. Mazziotta, L. Mc Comb, N. McCubbin, I. McHardy, C. Medina, F. Mehrez, C. Melioli, D. Melkumyan, T. Melse, S. Mereghetti, M. Merk, P. Mertsch, J.-L. Meunier, T. Meures, M. Meyer, J. L. Meyrelles jr, A. Miccichè, T. Michael, J. Michałowski, P. Mientjes, I. Mievre, A. Mihailidis, J. Miller, T. 
Mineo, M. Minuti, N. Mirabal, F. Mirabel, J. M. Miranda, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, I. Mognet, M. Mohammed, R. Moharana, L. Mohrmann, E. Molinari, P. Molyneux, E. Monmarthe, G. Monnier, T. Montaruli, C. Monte, I. Monteiro, D. Mooney, P. Moore, A. Moralejo, C. Morello, E. Moretti, K. Mori, P. Morris, A. Morselli, F. Moscato, D. Motohashi, F. Mottez, Y. Moudden, E. Moulin, S. Mueller, R. Mukherjee, P. Munar, M. Munari, C. Mundell, J. Mundet, H. Muraishi, K. Murase, A. Muronga, A. Murphy, N. Nagar, S. Nagataki, T. Nagayoshi, B. K. Nagesh, T. Naito, D. Nakajima, D. Nakajima, T. Nakamori, K. Nakayama, J. Nanni, D. Naumann, P. Nayman, L. Nellen, R. Nemmen, A. Neronov, N. Neyroud, T. Nguyen, T. T. Nguyen, T. Nguyen Trung, L. Nicastro, J. Nicolau-Kukliński, F. Niederwanger, A. Niedźwiecki, J. Niemiec, D. Nieto, M. Nievas-Rosillo, A. Nikolaidis, M. Nikołajuk, K. Nishijima, K.-I. Nishikawa, G. Nishiyama, K. Noda, K. Noda, L. Nogues, S. Nolan, R. Northrop, D. Nosek, M. Nöthe, B. Novosyadlyj, L. Nozka, F. Nunio, L. Oakes, P. O'Brien, C. Ocampo, G. Occhipinti, J. P. Ochoa, A. OFaolain de Bhroithe, R. Oger, Y. Ohira, M. Ohishi, S. Ohm, H. Ohoka, N. Okazaki, A. Okumura, J.-F. Olive, D. Olszowski, R. A. Ong, S. Ono, M. Orienti, R. Orito, A. Orlati, J. Osborne, M. Ostrowski, D. Ottaway, N. Otte, S. Öttl, E. Ovcharov, I. Oya, A. Ozieblo, M. Padovani, I. Pagano, S. Paiano, A. Paizis, J. Palacio, M. Palatka, J. Pallotta, K. Panagiotidis, J.-L. Panazol, D. Paneque, M. Panter, M. R. Panzera, R. Paoletti, M. Paolillo, A. Papayannis, G. Papyan, A. Paravac, J. M. Paredes, G. Pareschi, N. Park, D. Parsons, P. Paśko, S. Pavy, M. Pech, A. Peck, G. Pedaletti, A. Pe'er, S. Peet, D. Pelat, A. Pepato, M. d. C. Perez, L. Perri, M. Perri, M. Persic, M. Persic, A. Petrashyk, P.-O. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Pfeiffer, G. Piano, D. Pieloth, E. Pierre, F. Pinto de Pinho, C. Pio García, Y. Piret, A. Pisarski, S. Pita, Ł. Platos, R. Platzer, S. Podkladkin, L. 
Pogosyan, M. Pohl, P. Poinsignon, A. Pollo, A. Porcelli, J. Porthault, W. Potter, S. Poulios, J. Poutanen, E. Prandini, E. Prandini, J. Prast, K. Pressard, G. Principe, F. Profeti, D. Prokhorov, H. Prokoph, M. Prouza, R. Pruchniewicz, G. Pruteanu, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, S. Pürckhauer, R. Pyzioł, F. Queiroz, E. J. Quel, J. Quinn, A. Quirrenbach, I. Rafighi, S. Rainò, P. J. Rajda, M. Rameez, R. Rando, R. C. Rannot, M. Rataj, T. Ravel, S. Razzaque, P. Reardon, I. Reichardt, O. Reimann, A. Reimer, O. Reimer, A. Reisenegger, M. Renaud, S. Renner, T. Reposeur, B. Reville, A. Rezaeian, W. Rhode, D. Ribeiro, R. Ribeiro Prado, M. Ribó, G. Richards, M. G. Richer, T. Richtler, J. Rico, J. Ridky, F. Rieger, M. Riquelme, P. R. Ristori, S. Rivoire, V. Rizi, E. Roache, J. Rodriguez, G. Rodriguez Fernandez, J. J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, M. Roncadelli, J. Rosado, J. Rose, S. Rosen, S. Rosier Lees, D. Ross, G. Rouaix, J. Rousselle, A. C. Rovero, G. Rowell, F. Roy, S. Royer, A. Rubini, B. Rudak, A. Rugliancich, W. Rujopakarn, C. Rulten, M. Rupiński, F. Russo, F. Russo, K. Rutkowski, O. Saavedra, S. Sabatini, B. Sacco, I. Sadeh, E. O. Saemann, S. Safi-Harb, A. Saggion, V. Sahakian, T. Saito, N. Sakaki, S. Sakurai, A. Salamon, M. Salega, D. Salek, F. Salesa Greus, J. Salgado, G. Salina, L. Salinas, A. Salini, D. Sanchez, M. Sanchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E. M. Santos, R. Santos-Lima, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, K. Satalecka, Y. Sato, R. Savalle, M. Sawada, F. Sayède, S. Schanne, T. Schanz, E. J. Schioppa, S. Schlenstedt, J. Schmid, T. Schmidt, J. Schmoll, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schubert, E.-M. Schullian, J. Schultze, A. Schulz, S. Schulz, K. Schure, F. Schussler, T. Schwab, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, U. Schwendicke, C. Schwerdt, E. Sciacca, S. Scuderi, A. Segreto, J.-H. Seiradakis, G. H. 
Sembroski, D. Semikoz, O. Sergijenko, N. Serre, M. Servillat, K. Seweryn, N. Shafi, A. Shalchi, M. Sharma, M. Shayduk, R. C. Shellard, T. Shibata, A. Shigenaka, I. Shilon, E. Shum, L. Sidoli, M. Sidz, J. Sieiro, H. Siejkowski, J. Silk, A. Sillanpää, D. Simone, H. Simpson, B. B. Singh, A. Sinha, G. Sironi, J. Sitarek, P. Sizun, V. Sliusar, V. Sliusar, A. Smith, D. Sobczyńska, H. Sol, G. Sottile, M. Sowiński, F. Spanier, G. Spengler, R. Spiga, R. Stadler, O. Stahl, A. Stamerra, S. Stanič, R. Starling, D. Staszak, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, M. Stephan, N. Stergioulas, R. Sternberger, M. Sterzel, B. Stevenson, F. Stinzing, M. Stodulska, M. Stodulski, T. Stolarczyk, G. Stratta, U. Straumann, L. Stringhetti, M. Strzys, R. Stuik, K.-H. Sulanke, T. Suomijärvi, A. D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, P. Szwarnog, A. Tacchini, K. Tachihara, G. Tagliaferri, H. Tajima, H. Takahashi, K. Takahashi, M. Takahashi, L. Takalo, S. Takami, J. Takata, J. Takeda, G. Talbot, T. Tam, M. Tanaka, S. Tanaka, T. Tanaka, Y. Tanaka, C. Tanci, S. Tanigawa, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, A. Taylor, L. A. Tejedor, I. Telezhinsky, F. Temme, P. Temnikov, C. Tenzer, Y. Terada, J. C. Terrazas, R. Terrier, D. Terront, T. Terzic, D. Tescaro, M. Teshima, M. Teshima, V. Testa, D. Tezier, J. Thayer, J. Thornhill, S. Thoudam, D. Thuermann, L. Tibaldo, A. Tiengo, M. C. Timpanaro, D. Tiziani, M. Tluczykont, C. J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, J. Tomastik, Y. Tomono, A. Tonachini, D. Tonev, K. Torii, M. Tornikoski, D. F. Torres, M. Torres, E. Torresi, G. Toso, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, T. Toyama, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, M. Trzeciak, K. Tsinganos, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, Y. Umetsu, S. S. Upadhya, M. Uslenghi, V. Vagelli, F. Vagnetti, J. Valdes-Galicia, M. Valentino, P. 
Vallania, L. Valore, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G. Vasileiadis, V. Vassiliev, J. R. Vázquez, M. L. Vázquez Acosta, M. Vecchi, A. Vega, I. Vegas, P. Veitch, P. Venault, L. Venema, C. Venter, S. Vercellone, S. Vergani, K. Verma, V. Verzi, G. P. Vettolani, C. Veyssiere, A. Viana, N. Viaux, J. Vicha, C. Vigorito, P. Vincent, S. Vincent, J. Vink, V. Vittorini, N. Vlahakis, L. Vlahos, H. Voelk, V. Voisin, A. Vollhardt, A. Volpicelli, H. von Brand, S. Vorobiov, I. Vovk, M. Vrastil, L. V. Vu, T. Vuillaume, R. Wagner, R. Wagner, S. J. Wagner, S. P. Wakely, T. Walstra, R. Walter, T. Walther, J. E. Ward, M. Ward, K. Warda, D. Warren, S. Wassberg, J. J. Watson, P. Wawer, R. Wawrzaszek, N. Webb, P. Wegner, O. Weiner, A. Weinstein, R. Wells, F. Werner, H. Wetteskind, M. White, R. White, M. Więcek, A. Wierzcholska, S. Wiesand, R. Wijers, P. Wilcox, N. Wild, A. Wilhelm, M. Wilkinson, M. Will, M. Will, D. A. Williams, J. T. Williams, R. Willingale, N. Wilson, M. Winde, K. Winiarski, H. Winkler, M. Winter, R. Wischnewski, E. Witt, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, E. Wu, T. Wu, K. K. Yadav, H. Yamamoto, T. Yamamoto, N. Yamane, R. Yamazaki, S. Yanagita, L. Yang, D. Yelos, A. Yoshida, M. Yoshida, T. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, V. Zabalza, D. Zaborov, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanmar Sanchez, D. Zaric, D. Zavrtanik, M. Zavrtanik, A. Zdziarski, A. Zech, H. Zechlin, A. Zhao, V. Zhdanov, A. Ziegler, J. Ziemann, K. Ziętara, A. Zink, J. Ziółkowski, V. Zitelli, A. Zoli, J. Zorn, P. Żychowski
Oct. 17, 2016 astro-ph.HE
List of contributions from the Cherenkov Telescope Array (CTA) Consortium presented at the 6th International Symposium on High-Energy Gamma-Ray Astronomy (Gamma 2016), July 11-15, 2016, in Heidelberg, Germany.
Search for VHE gamma-ray emission from Geminga pulsar and nebula with the MAGIC telescopes (1603.00730)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra Gonzalez, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Ona Wilhelmi, F. Di Pierro, M. Doert, A. Dominguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, V. Fallah Ramazani, A. Fernandez-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. Garcia Lopez, M. Garczarczy, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinovic, A. Gonzalez Munoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martinez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogues, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, I. Reichardt, W. Rhode, M. Ribo, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Suric, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzic, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin
March 5, 2016 astro-ph.HE
The Geminga pulsar, one of the brightest gamma-ray sources, is a promising candidate for emission of very-high-energy (VHE > 100 GeV) pulsed gamma rays. In addition, detection of a large nebula has been claimed by water-Cherenkov instruments. We performed deep observations of Geminga with the MAGIC telescopes, yielding 63 hours of good-quality data, and searched for emission from the pulsar and the pulsar wind nebula. We found no significant detection and derived 95% confidence level upper limits. The resulting upper limits of 5.3 x 10^{-13} TeV cm^{-2} s^{-1} for the Geminga pulsar and 3.5 x 10^{-12} TeV cm^{-2} s^{-1} for the surrounding nebula at 50 GeV are the most constraining obtained so far at VHE. To complement the VHE observations, we also analyzed 5 years of Fermi-LAT data from Geminga, finding that a sub-exponential cut-off is preferred over the exponential cut-off that has typically been used in the literature. We also find that, above 10 GeV, the gamma-ray spectrum of Geminga can be described by a power law with an index softer than 5. The extrapolation of the power-law Fermi-LAT pulsed spectrum to VHE falls well below the MAGIC upper limits, indicating that the detection of pulsed emission from Geminga with the current generation of Cherenkov telescopes is very difficult.
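The exponential versus sub-exponential cut-off comparison amounts to fitting dN/dE = N0 (E/E0)^-Gamma exp(-(E/Ec)^b) with b = 1 (exponential) against b < 1 (sub-exponential). A minimal sketch with hypothetical parameters (not the fitted Geminga values) shows why the sub-exponential form leaves more flux above the cut-off energy:

```python
import numpy as np

def cutoff_power_law(E, N0, gamma, Ec, b):
    """dN/dE = N0 * (E/E0)^-gamma * exp(-(E/Ec)^b), with E0 = 1 GeV."""
    return N0 * E ** (-gamma) * np.exp(-(E / Ec) ** b)

# Hypothetical parameters, roughly in the range seen for gamma-ray pulsars.
N0, gamma, Ec = 1.0, 1.2, 2.0   # Ec in GeV

E = np.array([1.0, 10.0, 50.0])  # energies in GeV
exp_cut = cutoff_power_law(E, N0, gamma, Ec, b=1.0)     # exponential
subexp_cut = cutoff_power_law(E, N0, gamma, Ec, b=0.5)  # sub-exponential

# Above Ec the sub-exponential spectrum falls off far more slowly,
# so it predicts relatively more flux at tens of GeV.
ratio = subexp_cut / exp_cut
print(ratio)
```

This slower fall-off is why the preferred sub-exponential shape matters for extrapolating the Fermi-LAT spectrum into the VHE band probed by MAGIC.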
CTA Contributions to the 34th International Cosmic Ray Conference (ICRC2015) (1508.05894)
The CTA Consortium: A. Abchiche, U. Abeysekara, Ó. Abril, F. Acero, B. S. Acharya, M. Actis, G. Agnetta, J. A. Aguilar, F. Aharonian, A. Akhperjanian, A. Albert, M. Alcubierre, R. Alfaro, E. Aliu, A. J. Allafort, D. Allan, I. Allekotte, R. Aloisio, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E. O. Angüner, E. Antolini, L. A. Antonelli, M. Antonucci, V. Antonuccio, P. Antoranz, C. Aramo, A. Aravantinos, A. Argan, T. Armstrong, H. Arnaldi, L. Arnold, L. Arrabito, M. Arrieta, M. Arrieta, K. Asano, H. G. Asorey, T. Aune, C. B. Singh, A. Babic, M. Backes, A. Bais, S. Bajtlik, C. Balazs, M. Balbo, D. Balis, C. Balkowski, O. Ballester, J. Ballet, A. Balzer, A. Bamba, R. Bandiera, A. Barber, C. Barbier, M. Barceló, A. Barnacka, U. Barres de Almeida, J. A. Barrio, S. Basso, D. Bastieri, C. Bauer, A. Baushev, U. Becciani, Y. Becherini, J. Becker Tjus, V. Beckmann, W. Bednarek, W. Benbow, D. Benedico Ventura, J. Berdugo, D. Berge, E. Bernardini, S. Bernhard, K. Bernlöhr, B. Bertucci, M.-A. Besel, N. Bhatt, P. Bhattacharjee, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, S. Billotta, W. Bilnik, B. Biondo, T. Bird, E. Birsin, E. Bissaldi, J. Biteau, M. Bitossi, O. Blanch Bigas, P. Blasi, C. Boehm, L. Bogacz, M. Bogdan, M. Bohacova, C. Boisson, J. Boix Gargallo, J. Bolmont, G. Bonanno, A. Bonardi, P. Bonifacio, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, A. Bottani, M. Böttcher, J.-J. Bousquet, C. Boutonnet, F. Bouyjou, C. Braiding, L. Brandt, S. Brau-Nogué, J. Bregeon, T. Bretz, M. Briggs, M. Brigida, T. Bringmann, W. Brisken, E. Brocato, P. Brook, A. M. Brown, P. Brun, G. Brunetti, L. Brunetti, P. Bruno, M. Bryan, T. Buanes, N. Bucciantini, G. Buchholtz, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, R. Cameron, J. Camprecios, F. Canelli, R. Canestrari, S. Cantu, M. Capalbi, M. Capasso, G. Capobianco, P. Caraveo, J. Cardenzana, S. 
Carius, C. Carlile, E. Carmona, A. Carosi, R. Carosi, J. Carr, M. Carroll, J. Carter, P.-H. Carton, R. Caruso, J.-M. Casandjian, S. Casanova, E. Cascone, M. Casiraghi, A. Castellina, O. Catalano, S. Catalanotti, S. Cavazzani, S. Cazaux, M. Cefalà, P. Cerchiara, M. Cereda, M. Cerruti, E. Chabanne, P. Chadwick, C. Champion, S. Chaty, R. Chaves, P. Cheimets, A. Chen, X. Chen, M. Chernyakova, L. Chiappetti, M. Chikawa, D. Chinn, V. R. Chitnis, N. Cho, A. Christov, J. Chudoba, M. Cieślar, A. Cillis, M. A. Ciocci, R. Clay, J. Cohen-Tanugi, S. Colafrancesco, P. Colin, E. Colombo, J. Colome, S. Colonges, M. Compin, V. Conforti, V. Connaughton, S. Connell, J. Conrad, J. L. Contreras, P. Coppi, S. Corbel, J. Coridian, P. Corona, D. Corti, J. Cortina, L. Cossio, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. Covone, G. Crimi, S. J. Criswell, R. Crocker, J. Croston, G. Cusumano, P. Da Vela, Ø. Dale, F. D'Ammando, D. Dang, M. Daniel, I. Davids, B. Dawson, F. Dazzi, B. de Aguiar Costa, A. De Angelis, R. F. de Araujo Cardoso, V. De Caprio, G. De Cesare, A. De Franco, F. De Frondat, E. M. de Gouveia Dal Pino, I. de la Calle, G. A. De La Vega, R. de los Reyes Lopez, B. De Lotto, A. De Luca, J. R. T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, V. de Souza, G. Decock, C. Deil, M. Del Santo, E. Delagnes, G. Deleglise, C. Delgado, D. della Volpe, P. Deloye, G. Depaola, M. Detournay, A. Dettlaff, T. Di Girolamo, C. Di Giulio, A. Di Paola, F. Di Pierro, G. Di Sciascio, C. Díaz, J. Dick, H. Dickinson, S. Diebold, V. Diez, S. Digel, J. Dipold, G. Disset, A. Distefano, A. Djannati-Ataï, M. Doert, M. Dohmke, W. Domainko, N. Dominik, D. Dominis Prester, A. Donat, I. Donnarumma, D. Dorner, M. Doro, J.-L. Dournaux, K. Doyle, G. Drake, D. Dravins, L. Drury, G. Dubus, D. Dumas, J. Dumm, D. Durand, D. D'Urso, V. Dwarkadas, J. Dyks, M. Dyrda, J. Ebr, J. C. Echaniz, E. Edy, K. Egberts, K. Egberts, P. Eger, S. Einecke, J. Eisch, F. Eisenkolb, C. Eleftheriadis, D. 
Elsässer, D. Emmanoulopoulos, C. Engelbrecht, D. Engelhaupt, J.-P. Ernenwein, M. Errando, S.Eschbach, A. Etchegoyen, P. Evans, M. Fairbairn, A. Falcone, D. Fantinel, K. Farakos, C. Farnier, E. Farrell, S. Farrell, G. Fasola, S. Fegan, F. Feinstein, D. Ferenc, A. Fernandez, M. Fernandez-Alonso, O. Ferreira, M. Fesquet, P. Fetfatzis, A. Fiasson, A. Filipčič, M. Filipovic, D. Fink, C. Finley, J. P. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, R. Firpo Curcoll, H. Fleischhack, H. Flores, D. Florin, C. Föhr, E. Fokitis, L. Font, G. Fontaine, B. Fontes, F. Forest, M. Fornasa, A. Förster, P. Fortin, L. Fortson, N. Fouque, A. Franckowiak, F. J. Franco, A. Frankowski, N. Frega, I. Freire Mota Albuquerque, L. Freixas Coromina, L. Fresnillo, C. Fruck, M. Fuessling, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, T. Fukuda, Y. Fukui, S. Funk, W. Gäbele, S. Gabici, A. Gadola, N. Galante, D. D. Gall, Y. Gallant, D. Galloway, S. Gallozzi, S. Gao, B. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, C. Gargano, F. Gargano, S. Garozzo, F. Garrecht, D. Garrido, L. Garrido, D. Gascon, J. Gaskins, J. Gaudemard, M. Gaug, J. Gaweda, N. Geffroy, L. Gérard, A. Ghalumyan, A. Ghedina, M. Ghigo, P. Ghislain, E. Giannakaki, F. Gianotti, S. Giarrusso, G. Giavitto, B. Giebels, N. Giglietto, V. Gika, R. Gimenes, M. Giomi, P. Giommi, F. Giordano, G. Giovannini, E. Giro, M. Giroletti, A. Giuliani, J.-F. Glicenstein, N. Godinovic, P. Goldoni, M. Gomez Berisso, G. A. Gomez Vargas, M. M. Gonzalez, A. González, F. González, A. González Muñoz, K. S. Gothe, D. Gotz, T. Grabarczyk, R. Graciani, P. Grandi, F. Grañena, J. Granot, G. Grasseau, R. Gredig, A. J. Green, A. M. Green, T. Greenshaw, I. Grenier, A. Grillo, M.-H. Grondin, J. Grube, M. Grudzinska, J. Grygorczuk, V. Guarino, D. Guberman, S. Gunji, G. Gyuk, D. Hadasch, A. Hagedorn, J. Hahn, N. Hakansson, N. Hamer Heras, Y. Hanabata, S. Hara, M. J. Hardcastle, J. Harris, T. Hassan, K. Hatanaka, T. Haubold, A. Haupt, T. Hayakawa, M. 
Hayashida, M. Heller, R. Heller, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, W. Hirai, K. Hirotani, D. Hoard, D. Hoffmann, W. Hofmann, P. Hofverberg, T. Holch, J. Holder, S. Hooper, D. Horan, J.R. Hörandel, S. Hormigos, D. Horns, J. Hose, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, M. Hütten, T. B. Humensky, J. Huovelin, J.-F. Huppert, M. Iacovacci, A. Ibarra, B. Idźkowski, D. Ikawa, J. M. Illa, D. Impiombato, S. Incorvaia, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Iori, K. Ishio, G. L. Israel, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, M. Janiak, F. Jankowsky, P. Jean, C. Jeanney, I. Jegouzo, P. Jenke, J. J. Jimenez, M. Jingo, M. Jingo, L. Jocou, T. Jogler, C.A. Johnson, L. Journet, C. Juffroy, I. Jung, P. E. Kaaret, M. Kagaya, J. Kakuwa, O. Kalekin, C. Kalkuhl, R. Kankanyan, A. Karastergiou, K. Kärcher, M. Karczewski, S. Karkar, P. Karn, J. Kasperek, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, S. Kaufmann, N. Kawanaka, T. Kawashima, D. Kazanas, N. Kelley-Hoskins, B. Kellner-Leidel, E. Kendziorra, J. Kersten, B. Khélifi, D. B. Kieda, T. Kihm, S. Kisaka, R. Kissmann, S. Klepser, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, F. Köck, J. Kocot, A. Kodakkadan, K. Kodani, K. Kohri, T. Kojima, K. Kokkotas, D. Kolitzus, N. Komin, I. Kominis, Y. Konno, K. Kosack, G. Koss, R. Koul, G. Kowal, S. Koyama, J. Kozioł, M. Kraus, J. Krause, M. Krause, H. Krawzcynski, F. Krennrich, A. Kretzschmann, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, J. Kushida, A. Kuznetsov, A. La Barbera, N. La Palombara, V. La Parola, G. La Rosa, H. Laffon, T. Lagadec, R. Lahmann, K. Lalik, G. Lamanna, D. Landriu, H. Landt, R. G. Lang, D. Languignon, J. Lapington, P. Laporte, N. Latovski, D. Law-Green, J.-P. Le Fèvre, T. Le Flour, P. Le Sidaner, S.-H. Lee, W. H. Lee, K. Leffhalm, H. Leich, M. A. Leigui de Oliveira, D. Lelas, A. Lemière, M. 
Lemoine-Goumard, J.-P. Lenain, R. Leonard, R. Leoni, L. Lessio, G. Leto, A. Leveque, B. Lieunard, M. Limon, R. Lindemann, E. Lindfors, A. Liolios, A. Lipniacka, H. Lockart, T. Lohse, D. Loiseau, E. Łokas, S. Lombardi, F. Longo, G. Longo, A. Lopatin, M. Lopez, R. López-Coto, A. López-Oramas, D. Loreggia, T. Louge, F. Louis, C.-C. Lu, F. Lucarelli, D. Lucchesi, H. Lüdecke, P. L. Luque-Escamilla, O. Luz, E. Lyard, M. C. Maccarone, T. J. Maccarone, E. Mach, G. M. Madejski, A. Madonna, M. Mahabir, G. Maier, P. Majumdar, M. Makariev, G. Malaguti, G. Malaspina, A. K. Mallot, S. Maltezos, A. Mancilla, D. Mandat, G. Maneva, P. Manigot, N. Mankushiyil, K. Mannheim, N. Maragos, D. Marano, P. Marchegiani, J. A. Marcomini, A. Marcowith, M. Mariotti, M. Marisaldi, S. Markoff, A. Marszałek, C. Martens, J. Martí, J.-M. Martin, P. Martin, G. Martínez, M. Martínez, O. Martínez, R. Marx, P. Massimino, A. Mastichiadis, S. Mastroianni, M. Mastropietro, S. Masuda, H. Matsumoto, S. Matsuoka, S. Mattiazzo, G. Maurin, N. Maxted, J. Maya, M. Mayer, D. Mazin, E. Mazureau, M. N. Mazziotta, L. Mc Comb, A. McCann, N. McCubbin, I. McHardy, R. McKay, K. McKinney, K. Meagher, C. Medina, F. Mehrez, C. Melioli, D. Melkumyan, D. Melo, T. Melse, S. Mereghetti, P. Mertsch, M. Meyer, J. L. Meyrelles jr, A. Miccichè, J. Michałowski, P. Micolon, P. Mientjes, S. Mignot, A. Mihailidis, T. Mineo, M. Minuti, N. Mirabal, F. Mirabel, J. M. Miranda, R. Mirzoyan, A. Mistò, A. Mitchell, T. Mizuno, R. Moderski, I. Mognet, M. Mohammed, R. Moharana, E. Molinari, E. Monmarthe, G. Monnier, T. Montaruli, C. Monte, I. Monteiro, P. Moore, A. Moralejo Olaizola, C. Morello, E. Moretti, K. Mori, G. Morlino, A. Morselli, F. Mottez, Y. Moudden, E. Moulin, I. Mrusek, S. Mueller, R. Mukherjee, P. Munar-Adrover, C. Mundell, H. Muraishi, K. Murase, A. Muronga, A. Murphy, S. Nagataki, T. Nagayoshi, B. K. Nagesh, T. Naito, D. Nakajima, T. Nakamori, K. Nakayama, D. Naumann, P. Nayman, L. Nellen, R. Nemmen, A. Neronov, V. 
Neustroev, N. Neyroud, T. Nguyen, L. Nicastro, J. Nicolau-Kukliński, F. Niederwanger, A. Niedźwiecki, J. Niemiec, D. Nieto, M. Nievas, A. Nikolaidis, K. Nishijima, K.-I. Nishikawa, K. Noda, L. Nogues, S. Nolan, R. Northrop, D. Nosek, L. Nozka, F. Nunio, L. Oakes, P. O'Brien, G. Occhipinti, A. O'Faolain de Bhroithe, M. Ogino, Y. Ohira, M. Ohishi, S. Ohm, H. Ohoka, A. Okumura, J.-F. Olive, D. Olszowski, R. A. Ong, S. Ono, M. Orienti, R. Orito, A. Orlati, A. Orlati, J. Osborne, M. Ostrowski, L. A. Otero, D. Ottaway, N. Otte, I. Oya, A. Ozieblo, M. Padovani, I. Pagano, S. Paiano, A. Paizis, J. Palacio, M. Palatka, J. Pallotta, K. Panagiotidis, J.-L. Panazol, D. Paneque, M. Panter, M. R. Panzera, R. Paoletti, M. Paolillo, A. Papayannis, G. Papyan, A. Paravac, J. M. Paredes, G. Pareschi, N. Park, D. Parsons, P. Paśko, S. Pavy, M. Paz Arribas, M. Pech, A. Peck, G. Pedaletti, S. Peet, V. Pelassa, D. Pelat, C. Peres, M. d. C. Perez, L. Perri, M. Persic, A. Petrashyk, P.-O. Petrucci, B. Peyaud, M. Pfeifer, G. Pfeiffer, G. Piano, A. Pichel, D. Pieloth, M. Pierbattista, E. Pierre, F. Pinto de Pinho, C. Pio García, Y. Piret, S. Pita, A. Planes, M. Platino, Ł. Platos, R. Platzer, S. Podkladkin, L. Pogosyan, M. Pohl, P. Poinsignon, J. D. Ponz, A. Porcelli, W. Potter, S. Poulios, J. Poutanen, E. Prandini, J. Prast, R. Preece, F. Profeti, D. Prokhorov, H. Prokoph, M. Prouza, M. Proyetti, R. Pruchniewicz, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, R. Pyzioł, F. Queiroz, E. J. Quel, J. Quinn, A. Quirrenbach, E. Racero, T. Räck, J. Rafalski, I. Rafighi, S. Rainò, P. J. Rajda, M. Rameez, R. Rando, R. C. Rannot, M. Rataj, S. Rateau, T. Ravel, D. Ravignani, S. Razzaque, P. Reardon, O. Reimann, A. Reimer, O. Reimer, K. Reitberger, M. Renaud, S. Renner, T. Reposeur, R. Rettig, B. Reville, W. Rhode, D. Ribeiro, M. Ribó, G. Richards, M. G. Richer, J. Rico, J. Ridky, F. Rieger, P. Ringegni, P. R. Ristori, A. Rivière, S. Rivoire, E. Roache, G. Rodeghiero, J. Rodriguez, G. 
Rodriguez Fernandez, J. J. Rodríguez Vázquez, T. Rogers, G. Rojas, P. Romano, M. P. Romay Rodriguez, G. Romeo, G. E. Romero, M. Roncadelli, J. Rose, S. Rosen, S. Rosier Lees, D. Ross, P. Rossiter, G. Rouaix, J. Rousselle, A. C. Rovero, G. Rowell, F. Roy, S. Royer, A. Różańska, B. Rudak, A. Rugliancich, C. Rulten, M. Rupiński, F. Russo, K. Rutkowski, O. Saavedra, S. Sabatini, B. Sacco, E. O. Saemann, A. Saggion, L. Saha, V. Sahakian, K. Saito, T. Saito, N. Sakaki, M. Salega, D. Salek, J. Salgado, A. Salini, D. Sanchez, F. Sanchez, M. Sanchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E. M. Santos, R. Santos-Lima, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, R. Savalle, M. Sawada, F. Sayède, J. Schafer, S. Schanne, T. Schanz, E. J. Schioppa, S. Schlenstedt, R. Schlickeiser, T. Schmidt, J. Schmoll, M. Schneider, P. Schovanek, A. Schubert, C. Schultz, J. Schultze, A. Schulz, S. Schulz, K. Schure, F. Schussler, T. Schwab, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, U. Schwendicke, C. Schwerdt, A. Segreto, J.-H. Seiradakis, G. H. Sembroski, D. Semikoz, N. Serre, M. Servillat, K. Seweryn, N. Shafi, M. Sharma, M. Shayduk, R. C. Shellard, T. Shibata, K. Shiningayamwe Pandeni, A. Shukla, E. Shum, L. Sidoli, M. Sidz, J. Sieiro, H. Siejkowski, J. Silk, A. Sillanpää, D. Simone, B. B. Singh, A. Sinha, G. Sironi, J. Sitarek, P. Sizun, V. Slyusar, A. Smith, J. Smith, D. Sobczyńska, H. Sol, G. Sottile, M. Sowiński, F. Spanier, G. Spengler, D. Spiga, R. Stadler, O. Stahl, V. Stamatescu, A. Stamerra, S. Stanič, R. Starling, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, N. Stergioulas, R. Sternberger, M. Sterzel, B. Stevenson, F. Stinzing, M. Stodulska, M. Stodulski, T. Stolarczyk, U. Straumann, E. Strazzeri, L. Stringhetti, M. Strzys, R. Stuik, K.-H. Sulanke, A. D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, P. Szwarnog, A. Tacchini, K. 
Tachihara, G. Tagliaferri, H. Tajima, H. Takahashi, K. Takahashi, M. Takahashi, L. Takalo, H. Takami, G. Talbot, J. Tammi, M. Tanaka, S. Tanaka, T. Tanaka, Y. Tanaka, C. Tanci, E. Tarantino, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L. A. Tejedor, I. Telezhinsky, F. Temme, P. Temnikov, C. Tenzer, Y. Terada, R. Terrier, D. Tescaro, M. Teshima, V. Testa, D. Tezier, J. Thayer, V. Thomas, J. Thornhill, D. Thuermann, L. Tibaldo, O. Tibolla, A. Tiengo, G. Tijsseling, M. C. Timpanaro, M. Tluczykont, C. J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, K. Toma, J. Tomastik, Y. Tomono, A. Tonachini, D. Tonev, K. Torii, M. Tornikoski, D. F. Torres, M. Torres, E. Torresi, S. Toscano, G. Toso, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, C. Townsley, T. Toyama, P. Travnicek, M. Trifoglio, I. Troyano Pujadas, I. Troyano Pujadas, M. Trzeciak, K. Tsinganos, Y. Tsubone, Y. Tsuchiya, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, Y. Umetsu, C. Underwood, S. S. Upadhya, M. Uslenghi, F. Vagnetti, J. Valdes-Galicia, P. Vallania, G. Vallejo, L. Valore, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G. Vasileiadis, V. Vassiliev, M. L. Vázquez Acosta, M. Vecchi, I. Vegas, P. Veitch, L. Venema, C. Venter, S. Vercellone, S. Vergani, K. Verma, V. Verzi, G. P. Vettolani, A. Viana, J. Vicha, M. Videla, C. Vigorito, P. Vincent, S. Vincent, J. Vink, V. Vittorini, N. Vlahakis, L. Vlahos, H. Voelk, P. Vogler, V. Voisin, A. Vollhardt, A. Volpicelli, S. Vorobiov, I. Vovk, L. V. Vu, R. Wagner, R. M. Wagner, R. G. Wagner, S. J. Wagner, S. P. Wakely, R. Walter, T. Walther, J. E. Ward, M. Ward, K. Warda, R. Warwick, S. Wassberg, J. Watson, P. Wawer, R. Wawrzaszek, N. Webb, P. Wegner, A. Weinstein, Q. Weitzel, R. Wells, F. Werner, M. Werner, H. Wetteskind, M. White, R. White, M. Więcek, A. Wierzcholska, S. Wiesand, R. Wijers, N. Wild, A. Wilhelm, M. Wilkinson, M. Will, D. A. Williams, J. T. Williams, R. Willingale, M. Winde, K. Winiarski, H. 
Winkler, R. Wischnewski, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, E. Wu, T. Wu, K. K. Yadav, H. Yamamoto, T. Yamamoto, R. Yamazaki, S. Yanagita, L. Yang, J. M. Yebras, D. Yelos, W. Yeung, A. Yoshida, T. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, V. Zabalza, V. Zabalza, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanin, R. Zanmar Sanchez, D. Zavrtanik, M. Zavrtanik, A. Zdziarski, A. Zech, H. Zechlin, A. Zhao, A. Ziegler, J. Ziemann, K. Ziętara, J. Ziółkowski, V. Zitelli, A. Zoli, C. Zurbach, P. Żychowski
Sept. 11, 2015 astro-ph.HE
List of contributions from the CTA Consortium presented at the 34th International Cosmic Ray Conference, 30 July - 6 August 2015, The Hague, The Netherlands.
Discovery of very high energy gamma-ray emission from the blazar 1ES 0033+595 by the MAGIC telescopes (1410.7059)
J. Aleksić, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, K. Berger, E. Bernardini, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, E. Carmona, A. Carosi, D. Carreto Fidalgo, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, G. De Caneva, B. De Lotto, C. Delgado Mendez, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher, D. Elsaesser, E. Farina, D. Ferenc, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, N. Godinović, A. González Muñoz, S. R. Gozzini, D. Hadasch, M. Hayashida, J. Herrera, A. Herrero, D. Hildebrand, J. Hose, D. Hrupec, W. Idec, V. Kadenius, H. Kellermann, K. Kodani, Y. Konno, J. Krause, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, N. Lewandowska, E. Lindfors, S. Lombardi, M. López, R. López-Coto, A. López-Oramas, E. Lorenz, I. Lozano, M. Makariev, K. Mallot, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, M. Meucci, J. M. Miranda, R. Mirzoyan, A. Moralejo, P. Munar-Adrover, D. Nakajima, A. Niedzwiecki, K. Nilsson, K. Nishijima, K. Noda, N. Nowak, R. Orito, A. Overkemping, S. Paiano, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, S. Partini, M. Persic, F. Prada, P. G. Prada Moroni, E. Prandini, S. Preziuso, I. Puljak, R. Reinthal, W. Rhode, M. Ribó, J. Rico, J. Rodriguez Garcia, S. Rügamer, A. Saggion, T. Saito, K. Saito, K. Satalecka, V. Scalzotto, V. Scapin, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, F. Spanier, V. Stamatescu, A. Stamerra, T. Steinbring, J. Storz, S. Sun, T. Surić, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, O. Tibolla, D. F. Torres, T. Toyama, A. Treves, M. Uellenbeck, P. Vogler, R. 
M. Wagner, F. Zandanel, R. Zanin (MAGIC collaboration), V. Tronconi, S. Buson
Oct. 26, 2014 astro-ph.CO, astro-ph.GA, astro-ph.HE
The number of known very high energy (VHE) blazars is ~50, which is very small in comparison to the number of blazars detected at other frequencies. This situation is a handicap for population studies of blazars, which emit about half of their luminosity in the gamma-ray domain. Moreover, distant VHE blazars allow for the study of the environment that the high-energy gamma rays traverse on their path towards the Earth, namely the extragalactic background light (EBL) and the intergalactic magnetic field (IGMF), and hence they are of special interest for the astrophysics community. We present the first VHE detection of 1ES 0033+595, with a statistical significance of 5.5 sigma. The VHE emission of this object is constant throughout the MAGIC observations (2009 August and October), and can be parameterized with a power law with an integral flux above 150 GeV of (7.1 +/- 1.3) x 10^{-12} ph cm^{-2} s^{-1} and a photon index of 3.8 +/- 0.7. We model its spectral energy distribution (SED) as the result of inverse Compton scattering of synchrotron photons. For the study of the SED we used simultaneous optical R-band data from the KVA telescope, archival X-ray data from Swift as well as INTEGRAL, and simultaneous high energy (HE, 300 MeV - 10 GeV) gamma-ray data from the Fermi-LAT observatory. Using the empirical approach of Prandini et al. (2010) and the Fermi-LAT and MAGIC spectra for this object, we estimate the redshift of this source to be 0.34 +/- 0.08 +/- 0.05. This is a relevant result because this source is possibly one of the ten most distant VHE blazars known to date and, with further (simultaneous) observations, could play an important role in blazar population studies, as well as in future constraints on the EBL and IGMF.
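The relation between the quoted integral flux and the underlying differential power law is a standard closed-form identity, sketched here. The numbers mirror the values quoted in the abstract, but the inversion itself is generic, not part of the paper's analysis.

```python
def integral_flux_above(E0, N0, gamma):
    """Integral photon flux above E0 for dN/dE = N0 * (E/E0)**(-gamma).

    For gamma > 1 the integral converges:  F(>E0) = N0 * E0 / (gamma - 1).
    Units: N0 in ph cm^-2 s^-1 GeV^-1, E0 in GeV.
    """
    assert gamma > 1.0, "integral diverges for gamma <= 1"
    return N0 * E0 / (gamma - 1.0)

# Invert the relation to recover the differential normalization at 150 GeV
# that reproduces the quoted integral flux and photon index.
F, E0, gamma = 7.1e-12, 150.0, 3.8
N0 = F * (gamma - 1.0) / E0
print(N0)  # ~1.3e-13 ph cm^-2 s^-1 GeV^-1
```

With an index as steep as 3.8, nearly all of the integral flux comes from photons just above the 150 GeV threshold, which is why the threshold choice matters when comparing fluxes between instruments.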
First broadband characterization and redshift determination of the VHE blazar MAGIC J2001+439 (1409.3389)
MAGIC Collaboration, L. A. Antonelli, U. Barres de Almeida, E. Bernardini, G. Bonnoli, A. Carosi, J. L. Contreras, A. De Angelis, M. Doert, M. Doro, E. Farina, K. Frantzen, D. Garrido Terrats, A. González Muñoz, A. Herrero, V. Kadenius, J. Krause, N. Lewandowska, R. López-Coto, M. Makariev, K. Mannheim, M. Martínez, J. M. Miranda, D. Nakajima, N. Nowak, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, R. Reinthal, J. Rodriguez Garcia, K. Satalecka, T. Schweizer, I. Snidaric, A. Stamerra, T. Surić, P. Temnikov, O. Tibolla, M. Uellenbeck, R. Zanin, W. Max-Moerbeck, J. L. Richards, L. C. Reyes Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, , E-18080 Granada, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of 
Maryland, College Park, MD 20742, USA, , Lausanne, Switzerland, now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, now at Finnish Centre for Astronomy with ESO also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, INAF Istituto di Radioastronomia, 40129 Bologna, Italy, Dipartimento di Fisica e Astronomia, via Ranzani 1, 40127 Bologna, Italy, Cahill Center for Astronomy, Astrophysics, California Institute of Technology, 1200 E California Blvd, Pasadena, CA 91125, Isaac Newton Institute of Chile, St. Petersburg Branch, St. Petersburg, Russia, Pulkovo Observatory, 196140 St. Petersburg, Russia, Astronomical Institute, St. Petersburg State University, St. Petersburg, Russia, National Radio Astronomy Observatory, PO Box 0, Socorro, NM 87801, Science Data Center, I-00133 Roma, Italy, Department of Physics, Purdue University, 525 Northwestern Ave, West Lafayette, IN 47907, Department of Physics, Mathematics, College of Science, Engineering, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuo-ku, Sagamihara-shi Kanagawa 252-5258, Japan, University of Missouri-St. Louis, St. Louis, Missouri, USA, Physics Department, California Polytechnic State University, San Luis Obispo, CA 94307, USA, Deceased )
We aim to characterize the broadband emission from 2FGL J2001.1+4352, which has been associated with the unknown-redshift blazar MG4 J200112+4352. Based on its gamma-ray spectral properties, it was identified as a potential very high energy (VHE; E > 100 GeV) gamma-ray emitter. The source was observed with MAGIC first in 2009 and again in 2010 within a multi-instrument observation campaign. The MAGIC observations yielded 14.8 hours of good-quality stereoscopic data. The object was monitored at radio, optical and gamma-ray energies during 2010 and 2011. The source, named MAGIC J2001+439, is detected for the first time at VHE with MAGIC, at a statistical significance of 6.3 sigma (E > 70 GeV), during a 1.3-hour-long observation on 2010 July 16. The multi-instrument observations show variability in all energy bands, with the highest amplitude of variability in the X-ray and VHE bands. We also organized deep imaging optical observations with the Nordic Optical Telescope in 2013 to determine the source redshift. We determine for the first time the redshift of this BL Lac object through the measurement of its host galaxy during low blazar activity. Using the observational evidence that the luminosities of BL Lac host galaxies are confined to a relatively narrow range, we obtain z = 0.18 +/- 0.04. Additionally, we use the Fermi-LAT and MAGIC gamma-ray spectra to provide an independent redshift estimate of z = 0.17 +/- 0.10. Using the former (more accurate) redshift value, we adequately describe the broadband emission with a one-zone SSC model for different activity states, and interpret the few-day timescale variability as produced by changes in the high-energy component of the electron energy distribution.
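The host-galaxy "standard candle" argument can be sketched numerically. This is only a rough low-redshift approximation of the method: the absolute magnitude M_R = -22.8, the Hubble constant, and the apparent magnitude used below are assumed illustrative values (the apparent magnitude is hypothetical, not the paper's measurement), and K-corrections and higher-order cosmological terms are ignored.

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed)
M_HOST_R = -22.8      # typical BL Lac host absolute R magnitude (assumed)

def redshift_from_host_mag(m_R):
    """Low-z standard-candle estimate from an apparent host magnitude.

    Distance modulus m - M = 5*log10(d_L / 10 pc) gives the luminosity
    distance; then z ~ H0 * d_L / c in the low-redshift limit. A sketch
    of the method only, not the paper's fit.
    """
    d_L_pc = 10.0 ** ((m_R - M_HOST_R) / 5.0 + 1.0)
    d_L_Mpc = d_L_pc / 1.0e6
    return H0 * d_L_Mpc / C_KM_S

# A hypothetical apparent host magnitude near m_R ~ 16.6 lands close to
# the z ~ 0.18 quoted for MAGIC J2001+439.
print(round(redshift_from_host_mag(16.6), 2))  # 0.18
```

The narrow spread of BL Lac host luminosities is what turns a single photometric measurement of the host into a redshift estimate with the quoted ~0.04 uncertainty.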
MAGIC observations and multifrequency properties of the Flat Spectrum Radio Quasar 3C 279 in 2011 (1311.2833)
J. Aleksić, P. Antoranz, J. A. Barrio, E. Bernardini, G. Bonnoli, A. Carosi, J. L. Contreras, A. De Angelis, M. Doert, M. Doro, E. Farina, K. Frantzen, D. Garrido Terrats, A. González Muñoz, D. Hildebrand, V. Kadenius, J. Krause, D. Lelas, M. López, I. Lozano, K. Mannheim, M. Martínez, J. M. Miranda, D. Nakajima, N. Nowak, D. Paneque, S. Partini, E. Prandini, W. Rhode, S. Rügamer, K. Satalecka, T. Schweizer, I. Snidaric, A. Stamerra, L. Takalo, T. Terzić, O. Tibolla, R. M. Wagner, T. Vornanen, M. Tornikoski, W. Max-Moerbeck, J. Richards (34, for the Owens Valley Radio Observatory), M. Hayashida, D. A. Sanchez (37, on behalf of the Fermi-LAT), A. Marscher IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, , D-15738 Zeuthen, Germany, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Inst. 
de Astrofísica de Andalucía Università dell'Insubria, Como, I-22100 Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Universitat de Barcelona Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, now at Ecole polytechnique fédérale de Lausanne now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at: Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, Department of Physics, Astronomy, University of Turku, Finland, Aalto University Metsähovi Radio Observatory, Metsähovintie 114, 02540, Kylmälä, Finland, Cahill Center for Astronomy & Astrophysics, Caltech, 1200 E. California Blvd, Pasadena, CA, 91125, U.S.A., Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba, 277-8582, Japan, KIPAC, SLAC National Accelerator Laboratory, Stanford, CA, 94025, U.S.A., Laboratoire d'Annecy-le-Vieux de Physique des Particules, Université de Savoie, CNRS/IN2P3, F-74941 Annecy-le-Vieux, France, Institute for Astrophysical Research, Boston University, U.S.A.)
July 7, 2014 astro-ph.GA, astro-ph.HE
We study the multifrequency emission and spectral properties of the quasar 3C 279. We observed 3C 279 in very high energy (VHE, E > 100 GeV) gamma rays with the MAGIC telescopes during 2011, for the first time in stereoscopic mode. We combine these measurements with observations at other energy bands: in high energy (HE, E > 100 MeV) gamma rays from Fermi-LAT, in X-rays from RXTE, in the optical from the KVA telescope, and in the radio at 43 GHz, 37 GHz and 15 GHz from the VLBA, Metsähovi and OVRO radio telescopes, together with optical polarisation measurements from the KVA and Liverpool telescopes. During the MAGIC observations (February to April 2011) 3C 279 was in a low state in the optical, X-ray and gamma-ray bands. The MAGIC observations did not yield a significant detection. The resulting upper limits are in agreement with the extrapolation of the HE gamma-ray spectrum from Fermi-LAT, corrected for extragalactic background light absorption. The second part of the MAGIC observations in 2011 was triggered by a high activity state in the optical and gamma-ray bands. During the optical outburst the optical electric vector position angle rotated by about 180 degrees, with no simultaneous rotation of the 43 GHz radio polarisation angle. No VHE gamma rays were detected by MAGIC, and the derived upper limits suggest the presence of a spectral break or curvature between the Fermi-LAT and MAGIC bands. The combined upper limits are the strongest derived to date for the source at VHE, and lie below the level of the previously detected flux by a factor of 2. Radiation models that include synchrotron and inverse Compton emission match the optical to gamma-ray data, assuming one emission component inside the broad line region (BLR) responsible for the high-energy emission and one outside the BLR and the infrared torus producing the optical and low-energy emission. We interpret the optical polarisation with a bent-trajectory model.
MAGIC long-term study of the distant TeV blazar PKS 1424+240 in a multiwavelength context (1401.0464)
MAGIC Collaboration, L. A. Antonelli, U. Barres de Almeida, E. Bernardini, G. Bonnoli, A. Carosi, J. L. Contreras, A. De Angelis, M. Doert, M. Doro, E. Farina, K. Frantzen, D. Garrido Terrats, A. González Muñoz, A. Herrero, V. Kadenius, J. Krause, N. Lewandowska, R. López-Coto, M. Makariev, K. Mannheim, M. Martínez, J. M. Miranda, D. Nakajima, N. Nowak, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, R. Reinthal, J. Rodriguez Garcia, K. Satalecka, T. Schweizer, I. Snidaric, A. Stamerra, L. Takalo, T. Terzić, O. Tibolla, P. Vogler, S. Cutini, T. Kangas, A. Lähteenmäki, J. Richards Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Inst. de Astrofísica de Andalucía Università di Padova, INFN, I-35131 Padova, Italy, Università dell'Insubria, Como, I-22100 Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Inst. for Nucl. Research, Nucl. 
Energy, BG-1784 Sofia, Bulgaria, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Università di Pisa, , INFN Pisa, I-56126 Pisa, Italy, now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, now at Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland, now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, , Turku, Finland, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at: Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, Kavli Institute for Particle Astrophysics, Cosmology, SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305, USA, Cahill Center for Astronomy & Astrophysics, California Institute of Technology, 1200 E California Blvd, Pasadena, CA 91125, USA, Department of Physics, Purdue University, 525 Northwestern Ave, West Lafayette, IN 47907, USA, Aalto University Metsähovi Radio Observatory, Metsähovintie 114, FIN-02540 Kylmälä, Finland, Aalto University, Department of Radio Science, Engineering, Espoo, Finland, Department of Physics, University of Crete, Greece)
June 11, 2014 astro-ph.CO, astro-ph.HE
We present a study of the very high energy (VHE; E > 100 GeV) gamma-ray emission of the blazar PKS 1424+240 observed with the MAGIC telescopes. The primary aim of this paper is the multiwavelength spectral characterization and modeling of this blazar, made particularly interesting by the recent determination of a lower limit on its redshift, z > 0.6, which makes it a promising candidate to be the most distant known VHE source. The source was observed with the MAGIC telescopes in VHE gamma rays for a total observation time of ~33.6 h from 2009 to 2011. It was marginally detected in VHE gamma rays during 2009 and 2010, and the detection was later confirmed during an optical outburst in 2011. The combined significance of the stacked sample is ~7.2 sigma. The differential spectra measured during the different campaigns can be described by steep power laws with indices ranging from 3.5 +/- 1.2 to 5.0 +/- 1.7. The MAGIC spectra corrected for the absorption due to the extragalactic background light connect smoothly, within systematic errors, with the mean 2009-2011 spectrum observed at lower energies by the Fermi-LAT. The absorption-corrected MAGIC spectrum is flat, with no apparent turn-down up to 400 GeV. The multiwavelength light curve shows increasing flux in the radio and optical bands that could point to a common origin in the same region of the jet. The large separation between the two peaks of the constructed non-simultaneous spectral energy distribution also requires an extremely high Doppler factor if a one-zone synchrotron self-Compton model is applied. We find that a two-component synchrotron self-Compton model describes the spectral energy distribution of the source well, if the source is located at z~0.6.
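The extragalactic background light (EBL) correction mentioned above amounts to multiplying each observed flux point by exp(tau), where tau is the gamma-gamma optical depth at that energy and redshift. A minimal sketch of the operation, with a flux value and an optical depth that are illustrative placeholders rather than numbers from any specific EBL model:

```python
import math

def deabsorb(f_obs, tau):
    """Recover the intrinsic flux from an observed, EBL-absorbed flux.

    F_int = F_obs * exp(tau), where tau is the gamma-gamma optical
    depth at the photon energy and source redshift in question.
    """
    return f_obs * math.exp(tau)

# Illustrative values only: a hypothetical observed flux point and an
# assumed optical depth of order unity at a few hundred GeV for z ~ 0.6.
f_observed = 1.0e-11   # ph cm^-2 s^-1 TeV^-1 (placeholder)
tau_assumed = 2.0
print(deabsorb(f_observed, tau_assumed))
```

Because tau grows steeply with energy, this correction hardens the observed spectrum; a corrected spectrum that stays flat up to 400 GeV, as reported here, is therefore a nontrivial result.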
Detection of bridge emission above 50 GeV from the Crab pulsar with the MAGIC telescopes (1402.4219)
J. Aleksić, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, E. Carmona, A. Carosi, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, G. De Caneva, B. De Lotto, C. Delgado Mendez, M. Doert, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher, D. Elsaesser, E. Farina, D. Ferenc, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, N. Godinović, A. González Muñoz, S. R. Gozzini, D. Hadasch, M. Hayashida, J. Herrera, A. Herrero, D. Hildebrand, K. Hirotani, J. Hose, D. Hrupec, W. Idec, V. Kadenius, H. Kellermann, K. Kodani, Y. Konno, J. Krause, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, N. Lewandowska, E. Lindfors, S. Lombardi, M. López, R. López-Coto, A. López-Oramas, E. Lorenz, I. Lozano, M. Makariev, K. Mallot, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, P. Munar-Adrover, D. Nakajima, A. Niedzwiecki, K. Nilsson, K. Nishijima, K. Noda, N. Nowak, R. Orito, A. Overkemping, S. Paiano, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, S. Partini, M. Persic, P. G. Prada Moroni, E. Prandini, S. Preziuso, I. Puljak, R. Reinthal, W. Rhode, M. Ribó, J. Rico, J. Rodriguez Garcia, S. Rügamer, A. Saggion, T.Y. Saito, K. Saito, K. Satalecka, V. Scalzotto, V. Scapin, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, F. Spanier, V. Stamatescu, A. Stamerra, T. Steinbring, J. Storz, M. Strzys, S. Sun, T. Surić, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, O. Tibolla, D. F. Torres, T. Toyama, A. Treves, M. Uellenbeck, P. Vogler, R. M. Wagner, R. 
Zanin
May 2, 2014 astro-ph.HE
The Crab pulsar is the only astronomical source detected as pulsed in very high energy (VHE, E>100 GeV) gamma rays. The emission mechanism of the VHE pulsation is not yet fully understood, although several theoretical models have been proposed. In order to test the new models, we measured the light curve and the spectra of the Crab pulsar with high precision by means of deep observations. We analyzed 135 hours of selected MAGIC data taken between 2009 and 2013 in stereoscopic mode. In order to discuss the spectral shape in connection with lower energies, 4.6 years of {\it Fermi}-LAT data were also analyzed. The two known pulses per period were detected with significances of $8.0 \sigma$ and $12.6 \sigma$. In addition, significant emission was found between the two pulses at $6.2 \sigma$: we discovered bridge emission above 50 GeV between the two main pulses. This emission cannot be explained with the existing theories. These data can be used for testing new theoretical models.
Search for gamma-ray-emitting active galactic nuclei in the Fermi-LAT unassociated sample using machine learning (1312.5726)
M. Doert, M. Errando
Dec. 19, 2013 physics.data-an, astro-ph.IM, astro-ph.HE
The second Fermi-LAT source catalog (2FGL) is the deepest all-sky survey available in the gamma-ray band. It contains 1873 sources, of which 576 remain unassociated. Machine-learning algorithms can be trained on the gamma-ray properties of known active galactic nuclei (AGN) to find objects with AGN-like properties in the unassociated sample. This analysis finds 231 high-confidence AGN candidates, with increased robustness provided by intersecting the results of two complementary algorithms. A method to estimate the performance of the classification algorithm is also presented that takes into account the differences between associated and unassociated gamma-ray sources. Follow-up observations targeting AGN candidates, or studies of multiwavelength archival data, will reduce the number of unassociated gamma-ray sources and contribute to a more complete characterization of the population of gamma-ray-emitting AGN.
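The robustness step described above, keeping only sources flagged by both classifiers, can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the source names, scores, and the 0.5 threshold are all hypothetical.

```python
def high_confidence_candidates(scores_a, scores_b, threshold=0.5):
    """Keep only sources that BOTH classifiers flag as AGN-like.

    scores_a / scores_b map source name -> AGN probability from two
    independent classifiers; intersecting the two high-probability
    sets trades completeness for a lower false-association rate.
    """
    flagged_a = {s for s, p in scores_a.items() if p > threshold}
    flagged_b = {s for s, p in scores_b.items() if p > threshold}
    return flagged_a & flagged_b

# Hypothetical scores for three unassociated sources from two methods.
rf = {"src A": 0.9, "src B": 0.8, "src C": 0.4}
lr = {"src A": 0.7, "src B": 0.3, "src C": 0.9}
print(sorted(high_confidence_candidates(rf, lr)))  # only "src A" passes both
```

Requiring agreement between two independent methods discards sources on which the classifiers disagree, but the surviving candidates are correspondingly more reliable.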
CTA contributions to the 33rd International Cosmic Ray Conference (ICRC2013) (1307.2232)
The CTA Consortium: O. Abril, B.S. Acharya, M. Actis, G. Agnetta, J.A. Aguilar, F. Aharonian, M. Ajello, A. Akhperjanian, M. Alcubierre, J. Aleksic, R. Alfaro, E. Aliu, A.J. Allafort, D. Allan, I. Allekotte, R. Aloisio, E. Amato, G. Ambrosi, M. Ambrosio, J. Anderson, E.O. Angüner, L.A. Antonelli, V. Antonuccio, M. Antonucci, P. Antoranz, A. Aravantinos, A. Argan, T. Arlen, C. Aramo, T. Armstrong, H. Arnaldi, L. Arrabito, K. Asano, T. Ashton, H. G. Asorey, T. Aune, Y. Awane, H. Baba, A. Babic, N. Baby, J. Bähr, A. Bais, C. Baixeras, S. Bajtlik, M. Balbo, D. Balis, C. Balkowski, J. Ballet, A. Bamba, R. Bandiera, A. Barber, C. Barbier, M. Barceló, A. Barnacka, J. Barnstedt, U. Barres de Almeida, J.A. Barrio, A. Basili, S. Basso, D. Bastieri, C. Bauer, A. Baushev, U. Becciani, J. Becerra, J. Becerra, Y. Becherini, K.C. Bechtol, J. Becker Tjus, V. Beckmann, W. Bednarek, B. Behera, M. Belluso, W. Benbow, J. Berdugo, D. Berge, K. Berger, F. Bernard, T. Bernardino, K. Bernlöhr, B. Bertucci, N. Bhat, S. Bhattacharyya, B. Biasuzzi, C. Bigongiari, A. Biland, S. Billotta, T. Bird, E. Birsin, E. Bissaldi, J. Biteau, M. Bitossi, S. Blake, O. Blanch Bigas, P. Blasi, A. Bobkov, V. Boccone, M. Böttcher, L. Bogacz, J. Bogart, M. Bogdan, C. Boisson, J. Boix Gargallo, J. Bolmont, G. Bonanno, A. Bonardi, T. Bonev, P. Bonifacio, G. Bonnoli, P. Bordas, A. Borgland, J. Borkowski, R. Bose, O. Botner, A. Bottani, L. Bouchet, M. Bourgeat, C. Boutonnet, A. Bouvier, S. Brau-Nogué, I. Braun, T. Bretz, M. Briggs, M. Brigida, T. Bringmann, R. Britto, P. Brook, P. Brun, L. Brunetti, P. Bruno, N. Bucciantini, T. Buanes, J. Buckley, R. Bühler, V. Bugaev, A. Bulgarelli, T. Bulik, G. Busetto, S. Buson, K. Byrum, M. Cailles, R. Cameron, J. Camprecios, R. Canestrari, S. Cantu, M. Capalbi, P. Caraveo, E. Carmona, A. Carosi, R. Carosi, J. Carr, J. Carter, P.-H. Carton, R. Caruso, S. Casanova, E. Cascone, M. Casiraghi, A. Castellina, O. Catalano, S. Cavazzani, S. Cazaux, P. Cerchiara, M. Cerruti, E. 
Chabanne, P. Chadwick, C. Champion, R. Chaves, P. Cheimets, A. Chen, J. Chiang, L. Chiappetti, M. Chikawa, V.R. Chitnis, F. Chollet, A. Christof, J. Chudoba, M. Cieślar, A. Cillis, M. Cilmo, A. Codino, J. Cohen-Tanugi, S. Colafrancesco, P. Colin, J. Colome, S. Colonges, M. Compin, P. Conconi, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, P. Coppi, J. Coridian, P. Corona, D. Corti, J. Cortina, L. Cossio, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Couturier, S. Covino, G. Crimi, S.J. Criswell, J. Croston, G. Cusumano, M. Dafonseca, O. Dale, M. Daniel, J. Darling, I. Davids, F. Dazzi, A. de Angelis, V. De Caprio, F. De Frondat, E.M. de Gouveia Dal Pino, I. de la Calle, G.A. De La Vega, R. de los Reyes Lopez, B. de Lotto, A. De Luca, M. de Naurois, Y. de Oliveira, E. de Oña Wilhelmi, F. de Palma, V. de Souza, G. Decerprit, G. Decock, C. Deil, E. Delagnes, G. Deleglise, C. Delgado, D. della Volpe, P. Demange, G. Depaola, A. Dettlaff, T. Di Girolamo, C. Di Giulio, A. Di Paola, F. Di Pierro, G. di Sciascio, C. Díaz, J. Dick, R. Dickherber, H. Dickinson, V. Diez-Blanco, S. Digel, D. Dimitrov, G. Disset, A. Djannati-Ataï, M. Doert, M. Dohmke, W. Domainko, D. Dominis Prester, A. Donat, D. Dorner, M. Doro, J.-L. Dournaux, G. Drake, D. Dravins, L. Drury, F. Dubois, R. Dubois, G. Dubus, C. Dufour, D. Dumas, J. Dumm, D. Durand, V. Dwarkadas, J. Dyks, M. Dyrda, J. Ebr, E. Edy, K. Egberts, P. Eger, S. Einecke, C. Eleftheriadis, S. Elles, D. Emmanoulopoulos, D. Engelhaupt, R. Enomoto, J.-P. Ernenwein, M. Errando, A. Etchegoyen, P.A. Evans, A. Falcone, A. Faltenbacher, D. Fantinel, K. Farakos, C. Farnier, E. Farrell, G. Fasola, B.W. Favill, E. Fede, S. Federici, S. Fegan, F. Feinstein, D. Ferenc, P. Ferrando, M. Fesquet, P. Fetfatzis, A. Fiasson, E. Fillin-Martino, D. Fink, C. Finley, J. P. Finley, M. Fiorini, R. Firpo Curcoll, E. Flandrini, H. Fleischhack, H. Flores, D. Florin, W. Focke, C. Föhr, E. Fokitis, L. Font, G. Fontaine, M. Fornasa, A. Förster, L. 
Fortson, N. Fouque, A. Franckowiak, F.J. Franco, A. Frankowski, C. Fransson, G.W. Fraser, R. Frei, L. Fresnillo, C. Fruck, D. Fugazza, Y. Fujita, Y. Fukazawa, Y. Fukui, S. Funk, W. Gäbele, S. Gabici, R. Gabriele, A. Gadola, N. Galante, D. Gall, Y. Gallant, J. Gámez-García, M. Garczarczyk, B. García, R. Garcia López, D. Gardiol, F. Gargano, D. Garrido, L. Garrido, D. Gascon, M. Gaug, J. Gaweda, L. Gebremedhin, N. Geffroy, L. Gerard, A. Ghedina, M. Ghigo, P. Ghislain, E. Giannakaki, F. Gianotti, S. Giarrusso, G. Giavitto, B. Giebels, N. Giglietto, V. Gika, M. Giomi, P. Giommi, F. Giordano, N. Girard, E. Giro, A. Giuliani, T. Glanzman, J.-F. Glicenstein, N. Godinovic, V. Golev, M. Gomez Berisso, J. Gómez-Ortega, M.M. Gonzalez, A. González, F. González, A. González Muñoz, K.S. Gothe, T. Grabarczyk, M. Gougerot, R. Graciani, P. Grandi, F. Grañena, J. Granot, G. Grasseau, R. Gredig, A. Green, T. Greenshaw, T. Grégoire, A. Grillo, O. Grimm, M.-H. Grondin, J. Grube, M. Grudzinska, V. Gruev, S. Grünewald, J. Grygorczuk, V. Guarino, S. Gunji, G. Gyuk, D. Hadasch, A. Hagedorn, R. Hagiwara, J. Hahn, N. Hakansson, A. Hallgren, N. Hamer Heras, S. Hara, M.J. Hardcastle, D. Harezlak, J. Harris, T. Hassan, K. Hatanaka, T. Haubold, A. Haupt, T. Hayakawa, M. Hayashida, R. Heller, F. Henault, G. Henri, G. Hermann, R. Hermel, A. Herrero, O. Hervet, N. Hidaka, J.A. Hinton, K. Hirotani, D. Hoffmann, W. Hofmann, P. Hofverberg, J. Holder, J.R. Hörandel, D. Horns, D. Horville, J. Houles, M. Hrabovsky, D. Hrupec, H. Huan, B. Huber, J.-M. Huet, G. Hughes, T.B. Humensky, J. Huovelin, J.-F. Huppert, A. Ibarra, D. Ikawa, J.M. Illa, D. Impiombato, S. Incorvaia, S. Inoue, Y. Inoue, F. Iocco, K. Ioka, G.L. Israel, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, M. Janiak, P. Jean, C. Jeanney, J.J. Jimenez, T. Jogler, C. Johnson, T. Johnson, L. Journet, C. Juffroy, I. Jung, P. Kaaret, S. Kabuki, M. Kagaya, J. Kakuwa, C. Kalkuhl, R. Kankanyan, A. Karastergiou, K. Kärcher, M. Karczewski, S. 
Karkar, J. Kasperek, D. Kastana, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, N. Kawanaka, D. Kazanas, N. Kelley-Hoskins, B. Kellner-Leidel, H. Kelly, E. Kendziorra, B. Khélifi, D.B. Kieda, T. Kifune, T. Kihm, T. Kishimoto, K. Kitamoto, W. Kluźniak, C. Knapic, J. Knapp, J. Knödlseder, F. Köck, J. Kocot, K. Kodani, J.-H. Köhne, K. Kohri, K. Kokkotas, D. Kolitzus, N. Komin, I. Kominis, Y. Konno, H. Köppel, P. Korohoda, K. Kosack, G. Koss, R. Kossakowski, R. Koul, G. Kowal, S. Koyama, J. Kozioł, T. Krähenbühl, J. Krause, H. Krawzcynski, F. Krennrich, A. Krepps, A. Kretzschmann, R. Krobot, P. Krueger, H. Kubo, V.A. Kudryavtsev, J. Kushida, A. Kuznetsov, A. La Barbera, N. La Palombara, V. La Parola, G. La Rosa, K. Lacombe, G. Lamanna, J. Lande, D. Languignon, J.S. Lapington, P. Laporte, B. Laurent, C. Lavalley, T. Le Flour, A. Le Padellec, S.-H. Lee, W.H. Lee, J.-P. Lefèvre, H. Leich, M.A. Leigui de Oliveira, D. Lelas, J.-P. Lenain, R. Leoni, D.J. Leopold, T. Lerch, L. Lessio, G. Leto, B. Lieunard, S. Lieunard, R. Lindemann, E. Lindfors, A. Liolios, A. Lipniacka, H. Lockart, T. Lohse, S. Lombardi, F. Longo, A. Lopatin, M. Lopez, R. López-Coto, A. López-Oramas, A. Lorca, E. Lorenz, F. Louis, P. Lubinski, F. Lucarelli, H. Lüdecke, J. Ludwin, P.L. Luque-Escamilla, W. Lustermann, O. Luz, E. Lyard, M.C. Maccarone, T.J. Maccarone, G.M. Madejski, A. Madhavan, M. Mahabir, G. Maier, P. Majumdar, G. Malaguti, G. Malaspina, S. Maltezos, A. Manalaysay, A. Mancilla, D. Mandat, G. Maneva, A. Mangano, P. Manigot, K. Mannheim, I. Manthos, N. Maragos, A. Marcowith, M. Mariotti, M. Marisaldi, S. Markoff, A. Marszałek, C. Martens, J. Martí, J.-M. Martin, P. Martin, G. Martínez, F. Martínez, M. Martínez, F. Massaro, A. Masserot, A. Mastichiadis, A. Mathieu, H. Matsumoto, F. Mattana, S. Mattiazzo, A. Maurer, G. Maurin, S. Maxfield, J. Maya, D. Mazin, L. Mc Comb, A. McCann, N. McCubbin, I. McHardy, R. McKay, K. Meagher, C. Medina, C. Melioli, D. Melkumyan, D. Melo, S. Mereghetti, P. 
Mertsch, M. Meucci, M. Meyer, J. Michałowski, P. Micolon, A. Mihailidis, T. Mineo, M. Minuti, N. Mirabal, F. Mirabel, J.M. Miranda, R. Mirzoyan, A. Mistò, T. Mizuno, B. Moal, R. Moderski, I. Mognet, E. Molinari, M. Molinaro, T. Montaruli, C. Monte, I. Monteiro, P. Moore, A. Moralejo Olaizola, M. Mordalska, C. Morello, K. Mori, G. Morlino, A. Morselli, F. Mottez, Y. Moudden, E. Moulin, I. Mrusek, R. Mukherjee, P. Munar-Adrover, H. Muraishi, K. Murase, A. StJ. Murphy, S. Nagataki, T. Naito, D. Nakajima, T. Nakamori, K. Nakayama, C. Naumann, D. Naumann, M. Naumann-Godo, P. Nayman, D. Nedbal, D. Neise, L. Nellen, A. Neronov, V. Neustroev, N. Neyroud, L. Nicastro, J. Nicolau-Kukliński, A. Niedźwiecki, J. Niemiec, D. Nieto, A. Nikolaidis, K. Nishijima, K.-I. Nishikawa, K. Noda, S. Nolan, R. Northrop, D. Nosek, N. Nowak, A. Nozato, L. Oakes, P.T. O'Brien, Y. Ohira, M. Ohishi, S. Ohm, H. Ohoka, T. Okuda, A. Okumura, J.-F. Olive, R.A. Ong, R. Orito, M. Orr, J.P. Osborne, M. Ostrowski, L.A. Otero, N. Otte, E. Ovcharov, I. Oya, A. Ozieblo, L. Padilla, I. Pagano, S. Paiano, D. Paillot, A. Paizis, S. Palanque, M. Palatka, J. Pallota, M. Palatiello, K. Panagiotidis, J.-L. Panazol, D. Paneque, M. Panter, M.R. Panzera, R. Paoletti, A. Papayannis, G. Papyan, J.M. Paredes, G. Pareschi, J.-M. Parraud, D. Parsons, G. Pauletta, M. Paz Arribas, M. Pech, G. Pedaletti, V. Pelassa, D. Pelat, M. d. C. Perez, M. Persic, P.-O. Petrucci, B. Peyaud, A. Pichel, D. Pieloth, E. Pierre, S. Pita, G. Pivato, F. Pizzolato, M. Platino, Ł. Platos, R. Platzer, S. Podkladkin, L. Pogosyan, M. Pohl, G. Pojmanski, J.D. Ponz, W. Potter, J. Poutanen, E. Prandini, J. Prast, R. Preece, F. Profeti, H. Prokoph, M. Prouza, M. Proyetti, I. Puerto-Giménez, G. Pühlhofer, I. Puljak, M. Punch, R. Pyzioł, E.J. Quel, J. Quesada, J. Quinn, A. Quirrenbach, E. Racero, S. Rainò, P.J. Rajda, M. Rameez, P. Ramon, R. Rando, R.C. Rannot, M. Rataj, M. Raue, D. Ravignani, P. Reardon, O. Reimann, A. Reimer, O. Reimer, K. 
Reitberger, M. Renaud, S. Renner, B. Reville, W. Rhode, M. Ribó, M. Ribordy, G. Richards, M.G. Richer, J. Rico, J. Ridky, F. Rieger, P. Ringegni, J. Ripken, P.R. Ristori, A. Rivière, S. Rivoire, L. Rob, G. Rodeghiero, U. Roeser, R. Rohlfs, G. Rojas, P. Romano, W. Romaszkan, G. E. Romero, S.R. Rosen, S. Rosier Lees, D. Ross, G. Rouaix, J. Rousselle, S. Rousselle, A.C. Rovero, F. Roy, S. Royer, B. Rudak, C. Rulten, M. Rupiński, F. Russo, F. Ryde, O. Saavedra, B. Sacco, E.O. Saemann, A. Saggion, V. Sahakian, K. Saito, T. Saito, Y. Saito, N. Sakaki, R. Sakonaka, A. Salini, F. Sanchez, M. Sanchez-Conde, A. Sandoval, H. Sandaker, E. Sant'Ambrogio, A. Santangelo, E.M. Santos, A. Sanuy, L. Sapozhnikov, S. Sarkar, N. Sartore, H. Sasaki, K. Satalecka, M. Sawada, V. Scalzotto, V. Scapin, M. Scarcioffolo, J. Schafer, T. Schanz, S. Schlenstedt, R. Schlickeiser, T. Schmidt, J. Schmoll, P. Schovanek, M. Schroedter, A. Schubert, C. Schultz, J. Schultze, A. Schulz, K. Schure, F. Schussler, T. Schwab, U. Schwanke, J. Schwarz, S. Schwarzburg, T. Schweizer, S. Schwemmer, U. Schwendicke, C. Schwerdt, A. Segreto, J.-H. Seiradakis, G.H. Sembroski, M. Servillat, K. Seweryn, M. Sharma, M. Shayduk, R.C. Shellard, J. Shi, T. Shibata, A. Shibuya, S. Shore, E. Shum, E. Sideras-Haddad, L. Sidoli, M. Sidz, J. Sieiro, M. Sikora, J. Silk, A. Sillanpää, B.B. Singh, G. Sironi, J. Sitarek, C. Skole, R. Smareglia, A. Smith, D. Smith, J. Smith, N. Smith, D. Sobczyńska, H. Sol, G. Sottile, M. Sowiński, F. Spanier, D. Spiga, S. Spyrou, V. Stamatescu, A. Stamerra, R.L.C. Starling, Ł. Stawarz, R. Steenkamp, C. Stegmann, S. Steiner, C. Stella, N. Stergioulas, R. Sternberger, M. Sterzel, F. Stinzing, M. Stodulski, Th. Stolarczyk, U. Straumann, E. Strazzeri, L. Stringhetti, A. Suarez, M. Suchenek, R. Sugawara, K.-H. Sulanke, S. Sun, A.D. Supanitsky, T. Suric, P. Sutcliffe, J.M. Sykes, M. Szanecki, T. Szepieniec, A. Szostek, G. Tagliaferri, H. Tajima, H. Takahashi, K. Takahashi, L. Takalo, H. Takami, G. 
Talbot, J. Tammi, M. Tanaka, S. Tanaka, J. Tasan, M. Tavani, J.-P. Tavernet, L.A. Tejedor, I. Telezhinsky, P. Temnikov, C. Tenzer, Y. Terada, R. Terrier, M. Teshima, V. Testa, D. Tezier, J. Thayer, D. Thuermann, L. Tibaldo, L. Tibaldo, O. Tibolla, A. Tiengo, M.C. Timpanaro, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, A. Tonachini, K. Torii, M. Tornikoski, D.F. Torres, M. Torres, S. Toscano, G. Toso, G. Tosti, T. Totani, F. Toussenel, G. Tovmassian, P. Travnicek, A. Treves, M. Trifoglio, I. Troyano, K. Tsinganos, H. Ueno, G. Umana, K. Umehara, S.S. Upadhya, T. Usher, M. Uslenghi, F. Vagnetti, J.F. Valdes-Galicia, P. Vallania, G. Vallejo, W. van Driel, C. van Eldik, J. Vandenbrouke, J. Vanderwalt, H. Vankov, G. Vasileiadis, V. Vassiliev, D. Veberic, I. Vegas, S. Vercellone, S. Vergani, V. Verzi, G.P. Vettolani, C. Veyssière, J.P. Vialle, A. Viana, M. Videla, C. Vigorito, P. Vincent, S. Vincent, J. Vink, N. Vlahakis, L. Vlahos, P. Vogler, V. Voisin, A. Vollhardt, H.-P. von Gunten, S. Vorobiov, C. Vuerli, V. Waegebaert, R. Wagner, R.G. Wagner, S. Wagner, S.P. Wakely, R. Walter, T. Walther, K. Warda, R.S. Warwick, P. Wawer, R. Wawrzaszek, N. Webb, P. Wegner, A. Weinstein, Q. Weitzel, R. Welsing, M. Werner, H. Wetteskind, R.J. White, A. Wierzcholska, S. Wiesand, A. Wilhelm, M.I. Wilkinson, D.A. Williams, R. Willingale, M. Winde, K. Winiarski, R. Wischnewski, Ł. Wiśniewski, P. Wojcik, M. Wood, A. Wörnlein, Q. Xiong, K.K. Yadav, H. Yamamoto, T. Yamamoto, R. Yamazaki, S. Yanagita, J.M. Yebras, D. Yelos, A. Yoshida, T. Yoshida, T. Yoshikoshi, P. Yu, V. Zabalza, M. Zacharias, A. Zajczyk, L. Zampieri, R. Zanin, A. Zdziarski, A. Zech, A. Zhao, X. Zhou, K. Zietara, J. Ziolkowski, P. Ziółkowski, V. Zitelli, C. Zurbach, P. Zychowski
July 29, 2013 hep-ex, astro-ph.IM, astro-ph.HE
Compilation of CTA contributions to the proceedings of the 33rd International Cosmic Ray Conference (ICRC2013), which took place on 2-9 July 2013 in Rio de Janeiro, Brazil.
High confidence AGN candidates among unidentified Fermi-LAT sources via statistical classification (1306.6529)
June 27, 2013 astro-ph.HE
The second Fermi-LAT source catalog (2FGL) is the deepest survey of the gamma-ray sky ever compiled, containing 1873 sources that constitute a very complete sample down to an energy flux of about 10^(-11) erg cm^(-2) s^(-1). While counterparts at lower frequencies have been found for a large fraction of 2FGL sources, active galactic nuclei (AGN) being the most numerous class, 576 gamma-ray sources remain unassociated. In these proceedings, we describe a statistical algorithm that finds candidate AGNs in the sample of unassociated 2FGL sources by identifying targets whose gamma-ray properties resemble those of known AGNs. Using two complementary learning algorithms and intersecting the high-probability classifications from both methods, we increase the confidence of the method and reduce the false-association rate to 11%. Our study finds a high-confidence sample of 231 AGN candidates among the population of 2FGL unassociated sources. Selecting sources from this sample for follow-up observations or studies of archival data will substantially increase the probability of identifying possible counterparts at other wavelengths.
Mrk 421 active state in 2008: the MAGIC view, simultaneous multi-wavelength observations and SSC model constrained (1106.1589)
J. Aleksic, E. A. Alvarez, L. A. Antonelli, P. Antoranz, M. Asensio, M. Backes, J. A. Barrio, D. Bastieri, J. Becerra Gonzalez, W. Bednarek, A. Berdyugin, K. Berger, E. Bernardini, A. Biland, O. Blanch, R. K. Bock, A. Boller, G. Bonnoli, D. Borla Tridon, I. Braun, T. Bretz, A. Canellas, E. Carmona, A. Carosi, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, L. Cossio, S. Covino, F. Dazzi, A. De Angelis, G. De Caneva, E. De Cea del Pozo, B. De Lotto, C. Delgado Mendez, A. Diago Ortega, M. Doert, A. Dominguez, D. Dominis Prester, D. Dorner, M. Doro, D. Elsaesser, D. Ferenc, M. V. Fonseca, L. Font, C. Fruck, R. J. Garcia Lopez, M. Garczarczyk, D. Garrido, G. Giavitto, N. Godinovic, D. Hadasch, D. Häfner, A. Herrero, D. Hildebrand, D. Höhne-Mönch, J. Hose, D. Hrupec, B. Huber, T. Jogler, H. Kellermann, S. Klepser, T. Krahenbuh, J. Krause, A. La Barbera, D. Lelas, E. Leonardo, E. Lindfors, S. Lombardi, A. Lopez, M. Lopez, E. Lorenz, M. Makariev, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, M. Mariotti, M. Martinez, D. Mazin, M. Meucci, J. M. Miranda, R. Mirzoyan, H. Miyamoto, J. Moldon, A. Moralejo, P. Munar-Adrover, D. Nieto, K. Nilsson, R. Orito, I. Oya, D. Paneque, R. Paoletti, S. Pardo, J. M. Paredes, S. Partini, M. Pasanen, F. Pauss, M. A. Perez-Torres, M. Persic, L. Peruzzo, M. Pilia, J. Pochon, F. Prada, P. G. Prada Moroni, E. Prandini, I. Puljak, I. Reichardt, R. Reinthal, W. Rhode, M. Ribo, J. Rico, S. Rügamer, A. Saggion, K. Saito, T. Y. Saito, M. Salvati, K. Satalecka, V. Scalzotto, V. Scapin, C. Schultz, T. Schweizer, M. Shayduk, S. N. Shore, A. Sillanpaa, J. Sitarek, D. Sobczynska, F. Spanier, S. Spiro, A. Stamerra, B. Steinke, J. Storz, N. Strah, T. Suric, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzic, D. Tescaro, M. Teshima, O. Tibolla, D. F. Torres, A. Treves, M. Uellenbeck, H. Vankov, P. Vogler, R. M. Wagner, Q. Weitzel, V. Zabalza, F. Zandanel, R. Zanin
Context: The blazar Markarian 421 is one of the brightest TeV gamma-ray sources of the northern sky. From December 2007 until June 2008 it was intensively observed in the very high energy (VHE, E > 100 GeV) band by the single-dish Major Atmospheric Gamma-ray Imaging Cherenkov telescope (MAGIC-I). Aims: We aimed to measure the physical parameters of the emitting region of the blazar jet during active states. Methods: We performed dense VHE monitoring of the source with MAGIC-I and also collected complementary data in the soft X-ray and optical-UV bands; we then modeled the spectral energy distributions (SED) derived from simultaneous multi-wavelength data within the synchrotron self-Compton (SSC) framework. Results: The source showed intense and prolonged gamma-ray activity during the whole period, with integral fluxes (E > 200 GeV) seldom below the level of the Crab Nebula, and up to 3.6 times this value. Eight datasets of simultaneous optical-UV (KVA, Swift/UVOT), soft X-ray (Swift/XRT) and MAGIC-I VHE data were obtained during different outburst phases. The data constrain the physical parameters of the jet, once the spectral energy distributions obtained are interpreted within the framework of a single-zone SSC leptonic model. Conclusions: The main outcome of the study is that within the homogeneous model high Doppler factors (40 <= delta <= 80) are needed to reproduce the observed SED, but this model cannot explain the observed short time-scale variability; inhomogeneous models, by contrast, could allow for less extreme Doppler factors, more intense magnetic fields and shorter electron cooling times, compatible with hour- or sub-hour-scale variability.
The 2010 very high energy gamma-ray flare & 10 years of multi-wavelength observations of M 87 (1111.5341)
The H.E.S.S. Collaboration: A. Abramowski, F. Acero, F. Aharonian, A. G. Akhperjanian, G. Anton, A. Balzer, A. Barnacka, U. Barres de Almeida, Y. Becherini, J. Becker, B. Behera, K. Bernlöhr, E. Birsin, J. Biteau, A. Bochow, C. Boisson, J. Bolmont, P. Bordas, J. Brucker, F. Brun, P. Brun, T. Bulik, I. Büsching, S. Carrigan, S. Casanova, M. Cerruti, P. M. Chadwick, A. Charbonnier, R. C. G. Chaves, A. Cheesebrough, A. C. Clapson, G. Coignet, G. Cologna, J. Conrad, M. Dalton, M. K. Daniel, I. D. Davids, B. Degrange, C. Deil, H. J. Dickinson, A. Djannati-Ataï, W. Domainko, L. O'C. Drury, G. Dubus, K. Dutson, J. Dyks, M. Dyrda, K. Egberts, P. Eger, P. Espigat, L. Fallon, C. Farnier, S. Fegan, F. Feinstein, M. V. Fernandes, A. Fiasson, G. Fontaine, A. Förster, M. Füßling, Y. A. Gallant, H. Gast, L. Gérard, D. Gerbig, B. Giebels, J. F. Glicenstein, B. Glück, P. Goret, D. Göring, S. Häffner, J. D. Hague, D. Hampf, M. Hauser, S. Heinz, G. Heinzelmann, G. Henri, G. Hermann, J. A. Hinton, A. Hoffmann, W. Hofmann, P. Hofverberg, M. Holler, D. Horns, A. Jacholkowska, O. C. de Jager, C. Jahn, M. Jamrozy, I. Jung, M. A. Kastendieck, K. Katarzyński, U. Katz, S. Kaufmann, D. Keogh, D. Khangulyan, B. Khélifi, D. Klochkov, W. Kluźniak, T. Kneiske, Nu. Komin, K. Kosack, R. Kossakowski, H. Laffon, G. Lamanna, D. Lennarz, T. Lohse, A. Lopatin, C.-C. Lu, V. Marandon, A. Marcowith, J. Masbou, D. Maurin, N. Maxted, M. Mayer, T. J. L. McComb, M. C. Medina, J. Méhault, R. Moderski, E. Moulin, C. L. Naumann, M. Naumann-Godo, M. de Naurois, D. Nedbal, D. Nekrassov, N. Nguyen, B. Nicholas, J. Niemiec, S. J. Nolan, S. Ohm, E. de Oña Wilhelmi, B. Opitz, M. Ostrowski, I. Oya, M. Panter, M. Paz Arribas, G. Pedaletti, G. Pelletier, P.-O. Petrucci, S. Pita, G. Pühlhofer, M. Punch, A. Quirrenbach, M. Raue, S. M. Rayner, A. Reimer, O. Reimer, M. Renaud, R. de los Reyes, F. Rieger, J. Ripken, L. Rob, S. Rosier-Lees, G. Rowell, B. Rudak, C. B. Rulten, J. Ruppel, V. Sahakian, D. A. Sanchez, A. 
Santangelo, R. Schlickeiser, F. M. Schöck, A. Schulz, U. Schwanke, S. Schwarzburg, S. Schwemmer, F. Sheidaei, J. L. Skilton, H. Sol, G. Spengler, Ł. Stawarz, R. Steenkamp, C. Stegmann, F. Stinzing, K. Stycz, I. Sushch, A. Szostek, J.-P. Tavernet, R. Terrier, M. Tluczykont, K. Valerius, C. van Eldik, G. Vasileiadis, C. Venter, J. P. Vialle, A. Viana, P. Vincent, H. J. Völk, F. Volpe, S. Vorobiov, M. Vorster, S. J. Wagner, M. Ward, R. White, A. Wierzcholska, M. Zacharias, A. Zajczyk, A. A. Zdziarski, A. Zech, H.-S. Zechlin, The MAGIC Collaboration: J. Aleksić, L. A. Antonelli, P. Antoranz, M. Backes, J. A. Barrio, D. Bastieri, J. Becerra González, W. Bednarek, A. Berdyugin, K. Berger, E. Bernardini, A. Biland, O. Blanch, R. K. Bock, A. Boller, G. Bonnoli, D. Borla Tridon, I. Braun, T. Bretz, A. Cañellas, E. Carmona, A. Carosi, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, L. Cossio, S. Covino, F. Dazzi, A. De Angelis, E. De Cea del Pozo, B. De Lotto, C. Delgado Mendez, A. Diago Ortega, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, D. Elsaesser, D. Ferenc, M. V. Fonseca, L. Font, C. Fruck, R. J. García López, M. Garczarczyk, D. Garrido, G. Giavitto, N. Godinović, D. Hadasch, D. Häfner, A. Herrero, D. Hildebrand, D. Höhne-Mönch, J. Hose, D. Hrupec, B. Huber, T. Jogler, S. Klepser, T. Krähenbühl, J. Krause, A. La Barbera, D. Lelas, E. Leonardo, E. Lindfors, S. Lombardi, M. López, E. Lorenz, M. Makariev, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, M. Mariotti, M. Martínez, D. Mazin, M. Meucci, J. M. Miranda, R. Mirzoyan, H. Miyamoto, J. Moldón, A. Moralejo, P. Munar, D. Nieto, K. Nilsson, R. Orito, I. Oya, D. Paneque, R. Paoletti, S. Pardo, J. M. Paredes, S. Partini, M. Pasanen, F. Pauss, M. A. Perez-Torres, M. Persic, L. Peruzzo, M. Pilia, J. Pochon, F. Prada, P. G. Prada Moroni, E. Prandini, I. Puljak, I. Reichardt, R. Reinthal, W. Rhode, M. Ribó, J. Rico, S. Rügamer, A. Saggion, K. Saito, T. Y. Saito, M. Salvati, K. Satalecka, V. 
Scalzotto, V. Scapin, C. Schultz, T. Schweizer, M. Shayduk, S. N. Shore, A. Sillanpää, J. Sitarek, D. Sobczynska, F. Spanier, S. Spiro, A. Stamerra, B. Steinke, J. Storz, N. Strah, T. Surić, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, M. Thom, O. Tibolla, D. F. Torres, A. Treves, H. Vankov, P. Vogler, R. M. Wagner, Q. Weitzel, V. Zabalza, F. Zandanel, R. Zanin, The VERITAS Collaboration: T. Arlen, T. Aune, M. Beilicke, W. Benbow, A. Bouvier, S. M. Bradbury, J. H. Buckley, V. Bugaev, K. Byrum, A. Cannon, A. Cesarini, L. Ciupik, M. P. Connolly, W. Cui, R. Dickherber, C. Duke, M. Errando, A. Falcone, J. P. Finley, G. Finnegan, L. Fortson, A. Furniss, N. Galante, D. Gall, S. Godambe, S. Griffin, J. Grube, G. Gyuk, D. Hanna, J. Holder, H. Huan, C. M. Hui, P. Kaaret, N. Karlsson, M. Kertzman, Y. Khassen, D. Kieda, H. Krawczynski, F. Krennrich, M. J. Lang, S. LeBohec, G. Maier, S. McArthur, A. McCann, P. Moriarty, R. Mukherjee, P. D. Nuñez, R. A. Ong, M. Orr, A. N. Otte, N. Park, J. S. Perkins, A. Pichel, M. Pohl, H. Prokoph, K. Ragan, L. C. Reyes, P. T. Reynolds, E. Roache, H. J. Rose, J. Ruppel, M. Schroedter, G. H. Sembroski, G. D. Şentürk, I. Telezhinsky, G. Tešić, M. Theiling, S. Thibadeau, A. Varlotta, V. V. Vassiliev, M. Vivier, S. P. Wakely, T. C. Weekes, D. A. Williams, B. Zitzer, U. Barres de Almeida, M. Cara, C. Casadio, C.C. Cheung, W. McConville, F. Davies, A. Doi, G. Giovannini, M. Giroletti, K. Hada, P. Hardee, D. E. Harris, W. Junor, M. Kino, N. P. Lee, C. Ly, J. Madrid, F. Massaro, C. G. Mundell, H. Nagai, E. S. Perlman, I. A. Steele, R. C. Walker, D. L. Wood
Feb. 20, 2012 astro-ph.CO
Abridged: The giant radio galaxy M 87 with its proximity, famous jet, and very massive black hole provides a unique opportunity to investigate the origin of very high energy (VHE; E>100 GeV) gamma-ray emission generated in relativistic outflows and the surroundings of super-massive black holes. M 87 has been established as a VHE gamma-ray emitter since 2006. The VHE gamma-ray emission displays strong variability on timescales as short as a day. In this paper, results from a joint VHE monitoring campaign on M 87 by the MAGIC and VERITAS instruments in 2010 are reported. During the campaign, a flare at VHE was detected triggering further observations at VHE (H.E.S.S.), X-rays (Chandra), and radio (43 GHz VLBA). The excellent sampling of the VHE gamma-ray light curve enables one to derive a precise temporal characterization of the flare: the single, isolated flare is well described by a two-sided exponential function with significantly different flux rise and decay times. While the overall variability pattern of the 2010 flare appears somewhat different from that of previous VHE flares in 2005 and 2008, they share very similar timescales (~day), peak fluxes (Phi(>0.35 TeV) ~= (1-3) x 10^-11 ph cm^-2 s^-1), and VHE spectra. 43 GHz VLBA radio observations of the inner jet regions indicate no enhanced flux in 2010 in contrast to observations in 2008, where an increase of the radio flux of the innermost core regions coincided with a VHE flare. On the other hand, Chandra X-ray observations taken ~3 days after the peak of the VHE gamma-ray emission reveal an enhanced flux from the core. The long-term (2001-2010) multi-wavelength light curve of M 87, spanning from radio to VHE and including data from HST, LT, VLA and EVN, is used to further investigate the origin of the VHE gamma-ray emission. No unique, common MWL signature of the three VHE flares has been identified.
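The two-sided exponential profile used for this kind of flare characterization (exponential rise up to the peak, exponential decay after it, with independent timescales) can be sketched and fitted as follows. This is a generic illustration on synthetic data, not the collaborations' actual analysis code; all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_sided_exp(t, f0, t0, tau_rise, tau_decay):
    """Flare flux: exponential rise before the peak time t0, exponential decay after."""
    return np.where(t < t0,
                    f0 * np.exp((t - t0) / tau_rise),
                    f0 * np.exp(-(t - t0) / tau_decay))

# Synthetic nightly light curve (time in days, flux in arbitrary units)
rng = np.random.default_rng(0)
t = np.linspace(-5, 5, 41)
flux = two_sided_exp(t, 2.0, 0.0, 0.8, 1.6) + rng.normal(0.0, 0.05, t.size)

# Fit recovers the (here deliberately different) rise and decay timescales
popt, pcov = curve_fit(two_sided_exp, t, flux, p0=(1.0, 0.0, 1.0, 1.0))
f0, t0, tau_r, tau_d = popt
```

Comparing the fitted `tau_r` and `tau_d` (with their covariance from `pcov`) is how one quantifies whether the rise and decay times differ significantly, as reported for the 2010 flare.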
Observations of the Crab pulsar between 25 GeV and 100 GeV with the MAGIC I telescope (1108.5391)
MAGIC Collaboration: J. Aleksić, E. A. Alvarez, L. A. Antonelli, P. Antoranz, M. Asensio, M. Backes, J. A. Barrio, D. Bastieri, J. Becerra González, W. Bednarek, A. Berdyugin, K. Berger, E. Bernardini, A. Biland, O. Blanch, R. K. Bock, A. Boller, G. Bonnoli, D. Borla Tridon, I. Braun, T. Bretz, A. Cañellas, E. Carmona, A. Carosi, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, L. Cossio, S. Covino, F. Dazzi, A. De Angelis, G. De Caneva, E. De Cea del Pozo, B. De Lotto, C. Delgado Mendez, A. Diago Ortega, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, D. Eisenacher, D. Elsaesser, D. Ferenc, M. V. Fonseca, L. Font, C. Fruck, R. J. García López, M. Garczarczyk, D. Garrido, G. Giavitto, N. Godinović, D. Hadasch, D. Häfner, A. Herrero, D. Hildebrand, D. Höhne-Mönch, J. Hose, D. Hrupec, T. Jogler, H. Kellermann, S. Klepser, T. Krähenbühl, J. Krause, J. Kushida, A. La Barbera, D. Lelas, E. Leonardo, E. Lindfors, S. Lombardi, M. López, A. López-Oramas, E. Lorenz, M. Makariev, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, M. Meucci, J. M. Miranda, R. Mirzoyan, J. Moldón, A. Moralejo, P. Munar-Adrover, D. Nieto, K. Nilsson, R. Orito, N. Otte, I. Oya, D. Paneque, R. Paoletti, S. Pardo, J. M. Paredes, S. Partini, M. A. Perez-Torres, M. Persic, L. Peruzzo, M. Pilia, J. Pochon, F. Prada, P. G. Prada Moroni, E. Prandini, I. Puerto Gimenez, I. Puljak, I. Reichardt, R. Reinthal, W. Rhode, M. Ribó, J. Rico, M. Rissi, S. Rügamer, A. Saggion, K. Saito, T. Y. Saito, M. Salvati, K. Satalecka, V. Scalzotto, V. Scapin, C. Schultz, T. Schweizer, M. Shayduk, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, F. Spanier, S. Spiro, V. Stamatescu, A. Stamerra, B. Steinke, J. Storz, N. Strah, T. Surić, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, O. Tibolla, D. F. Torres, A. Treves, M. Uellenbeck, H. Vankov, P. Vogler, R. M. Wagner, Q. Weitzel, V. Zabalza, F. 
Zandanel, R. Zanin, K. Hirotani
Nov. 8, 2011 astro-ph.HE
We report on the observation of $\gamma$-rays above 25\,GeV from the Crab pulsar (PSR B0532+21) using the MAGIC I telescope. Two data sets from observations during the winter period 2007/2008 and 2008/2009 are used. In order to discuss the spectral shape from 100\,MeV to 100\,GeV, one year of public {\it Fermi} Large Area Telescope ({\it Fermi}-LAT) data are also analyzed to complement the MAGIC data. The extrapolation of the exponential cutoff spectrum determined with the Fermi-LAT data is inconsistent with MAGIC measurements, which requires a modification of the standard pulsar emission models. In the energy region between 25 and 100\,GeV, the emission in the P1 phase (from -0.06 to 0.04, location of the main pulse) and the P2 phase (from 0.32 to 0.43, location of the interpulse) can be described by power laws with spectral indices of $-3.1 \pm 1.0_{stat} \pm 0.3_{syst}$ and $-3.5 \pm 0.5_{stat} \pm 0.3_{syst}$, respectively. Assuming an asymmetric Lorentzian for the pulse shape, the peak positions of the main pulse and the interpulse are estimated to be at phases $-0.009 \pm 0.007$ and $0.393 \pm 0.009$, while the full widths at half maximum are $0.025 \pm 0.008$ and $0.053 \pm 0.015$, respectively.
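An asymmetric Lorentzian of the kind used for such pulse-shape fits, with different half-widths on either side of the peak, can be written down and checked numerically. The sketch below is illustrative only; the amplitude, peak phase, and widths are invented, not the fitted Crab values.

```python
import numpy as np

def asym_lorentzian(phi, a, phi0, w_left, w_right):
    """Lorentzian profile with different half-widths left and right of the peak phi0."""
    w = np.where(phi < phi0, w_left, w_right)
    return a / (1.0 + ((phi - phi0) / w) ** 2)

# Evaluate on a fine phase grid and recover peak position and FWHM numerically
phi = np.linspace(-0.2, 0.2, 200001)
y = asym_lorentzian(phi, 1.0, -0.01, 0.008, 0.017)

peak = phi[np.argmax(y)]
above = phi[y >= 0.5 * y.max()]
fwhm = above[-1] - above[0]  # for this profile, FWHM = w_left + w_right
```

Because each side falls to half maximum at one half-width from the peak, the full width at half maximum of this profile is simply `w_left + w_right`, which is why the fitted peak phases and FWHM values can be quoted directly from the shape parameters.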
Performance of the MAGIC Stereo System (1110.0947)
E. Carmona, J. Sitarek, P. Colin, M. Doert, S. Klepser, S. Lombardi, M. López, A. Moralejo, S. Pardo, V. Scalzotto, R. Zanin (for the MAGIC Collaboration)
MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes sensitive above ~60 GeV, located on the Canary Island of La Palma at a height of 2200 m a.s.l. Since autumn 2009 both telescopes have been operating together in stereoscopic mode. We use both Crab Nebula observations and Monte Carlo simulations to evaluate the performance of the system. Advanced stereo analysis allows MAGIC to achieve a sensitivity better than 0.8% of the Crab Nebula flux in 50 h of observations in the medium energy range (around a few hundred GeV). At those energies the angular resolution is better than 0.07°, and the energy resolution is as good as 16%. We also perform a detailed study of possible systematic effects for the MAGIC telescopes.
MAGIC Upper Limits for two Milagro-detected, Bright Fermi Sources in the Region of SNR G65.1+0.6 (1007.3359)
J. Aleksić, L. A. Antonelli, P. Antoranz, M. Backes, J. A. Barrio, D. Bastieri, J. Becerra González, W. Bednarek, A. Berdyugin, K. Berger, E. Bernardini, A. Biland, O. Blanch, R. K. Bock, A. Boller, G. Bonnoli, P. Bordas, D. Borla Tridon, V. Bosch-Ramon, D. Bose, I. Braun, T. Bretz, M. Camara, E. Carmona, A. Carosi, P. Colin, J. L. Contreras, J. Cortina, S. Covino, F. Dazzi, A. De Angelis, E. De Cea del Pozo, B. De Lotto, M. De Maria, F. De Sabata, C. DelgadoMendez, A. Diago Ortega, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, D. Elsaesser, M. Errando, D. Ferenc, M. V. Fonseca, L. Font, R. J. García López, M. Garczarczyk, M. Gaug, G. Giavitto, N. Godinović, D. Hadasch, A. Herrero, D. Hildebrand, D. Höhne-Mönch, J. Hose, D. Hrupec, T. Jogler, S. Klepser, T. Krähenbühl, D. Kranich, J. Krause, A. La Barbera, E. Leonardo, E. Lindfors, S. Lombardi, F. Longo, M. López, E. Lorenz, P. Majumdar, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, M. Mariotti, M. Martínez, D. Mazin, M. Meucci, J. M. Miranda, R. Mirzoyan, H. Miyamoto, J. Moldón, A. Moralejo, D. Nieto, K. Nilsson, R. Orito, I. Oya, R. Paoletti, J. M. Paredes, S. Partini, M. Pasanen, F. Pauss, R. G. Pegna, M. A. Perez-Torres, M. Persic, L. Peruzzo, J. Pochon, F. Prada, P. G. Prada Moroni, E. Prandini, N. Puchades, I. Puljak, I. Reichardt, R. Reinthal, W. Rhode, M. Ribó, J. Rico, M. Rissi, S. Rügamer, A. Saggion, K. Saito, T. Y. Saito, M. Salvati, M. Sánchez-Conde, K. Satalecka, V. Scalzotto, V. Scapin, C. Schultz, T. Schweizer, M. Shayduk, S. N. Shore, A. Sierpowska-Bartosik, A. Sillanpää, J. Sitarek, D. Sobczynska, F. Spanier, S. Spiro, A. Stamerra, B. Steinke, J. Storz, N. Strah, J. C. Struebig, T. Suric, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, D. F. Torres, H. Vankov, R. M. Wagner, Q. Weitzel, V. Zabalza, F. Zandanel, R. Zanin
April 4, 2011 astro-ph.HE
We report on the observation of the region around supernova remnant G65.1+0.6 with the stand-alone MAGIC-I telescope. This region hosts the two bright GeV gamma-ray sources 1FGL J1954.3+2836 and 1FGL J1958.6+2845. They are identified as GeV pulsars and both have a possible counterpart detected at about 35 TeV by the Milagro observatory. MAGIC collected 25.5 hours of good-quality data, and found no significant emission in the range around 1 TeV. We therefore report differential flux upper limits, assuming the emission to be point-like (<0.1 deg) or within a radius of 0.3 deg. In the point-like scenario, the flux limits around 1 TeV are at the level of 3% and 2% of the Crab Nebula flux for the two sources, respectively. This implies that the Milagro emission is either extended over a much larger area than our point spread function, or it must be peaked at energies beyond 1 TeV, resulting in a photon index harder than 2.2 in the TeV band.
The Syrian Conflict's Impact on International Law
Michael P. Scharf, Milena Sterio, Paul R. Williams
Expected online publication date: March 2020
Print publication: 31 March 2020
Proliferation of Faulty Materials Data Analysis in the Literature
Matthew R. Linford, Vincent S. Smentkowski, John T. Grant, C. Richard Brundle, Peter M.A. Sherwood, Mark C. Biesinger, Jeff Terry, Kateryna Artyushkova, Alberto Herrera-Gómez, Sven Tougaard, William Skinner, Jean-Jacques Pireaux, Christopher F. McConville, Christopher D. Easton, Thomas R. Gengenbach, George H. Major, Paul Dietrich, Andreas Thissen, Mark Engelhard, Cedric J. Powell, Karen J. Gaskell, Donald R. Baer
Journal: Microscopy and Microanalysis , First View
Published online by Cambridge University Press: 17 January 2020, pp. 1-2
Incidence and outcomes of prosthetic valve endocarditis in adults with tetralogy of Fallot
Alexander C. Egbe, Srikanth Kothapalli, William R. Miranda, Raja Jadav, Keerthana Banala, Rahul Vojjini, Faizan Faizee, Fouad Khalil, Maria Najam, Mounika Angirekula, Daniel C. Desimone, Heidi M. Connolly
Journal: Cardiology in the Young , First View
The risk of endocarditis varies with CHD complexity and the presence of prosthetic valves. The purpose of the study was therefore to describe the incidence and outcomes of prosthetic valve endocarditis in adults with repaired tetralogy of Fallot.
Retrospective review of adult tetralogy of Fallot patients who underwent prosthetic valve implantation, 1990–2017. We defined prosthetic valve endocarditis-related complications as prosthetic valve dysfunction, perivalvular extension of infection such as abscess/aneurysm/fistula, heart block, pulmonary/systemic embolic events, recurrent endocarditis, and death due to sepsis.
A total of 338 patients (age: 37 ± 15 years) received 352 prosthetic valves (pulmonary [n = 308, 88%], tricuspid [n = 13, 4%], mitral [n = 9, 3%], and aortic position [n = 22, 6%]). The annual incidence of prosthetic valve endocarditis was 0.4%. There were 12 prosthetic valve endocarditis-related complications in six patients, and these complications were prosthetic valve dysfunction (n = 4), systemic/pulmonary embolic events (n = 2), heart block (n = 1), aortic root abscess (n = 1), recurrent endocarditis (n = 2), and death due to sepsis (n = 1). Three (50%) patients required surgery at 2 days, 6 weeks, and 23 weeks from the time of prosthetic valve endocarditis diagnosis. Altogether three of the six (50%) patients died, and one of these deaths was due to sepsis.
The incidence, complication rate, and outcomes of prosthetic valve endocarditis in tetralogy of Fallot patients underscore some of the risks of having a prosthetic valve. It is important to educate the patients on the need for early presentation if they develop systemic symptoms, have a high index of suspicion for prosthetic valve endocarditis, and adopt a multi-disciplinary care approach in this high-risk population.
Calculating individualized risk components using a mobile app-based risk calculator for clinical high risk of psychosis: findings from ShangHai At Risk for Psychosis (SHARP) program
TianHong Zhang, LiHua Xu, HuiJun Li, Kristen A. Woodberry, Emily R. Kline, Jian Jiang, HuiRu Cui, YingYing Tang, XiaoChen Tang, YanYan Wei, Li Hui, Zheng Lu, LiPing Cao, ChunBo Li, Margaret A. Niznikiewicz, Martha E. Shenton, Matcheri S. Keshavan, William S. Stone, JiJun Wang
Journal: Psychological Medicine , First View
Published online by Cambridge University Press: 16 December 2019, pp. 1-8
Only 30% or fewer of individuals at clinical high risk (CHR) convert to full psychosis within 2 years. Efforts are thus underway to refine risk identification strategies to increase their predictive power. Our objective was to develop and validate the predictive accuracy and individualized risk components of a mobile app-based psychosis risk calculator (RC) in a CHR sample from the SHARP (ShangHai At Risk for Psychosis) program.
In total, 400 CHR individuals were identified by the Chinese version of the Structured Interview for Prodromal Syndromes. In the first phase of 300 CHR individuals, 196 subjects (65.3%) who completed neurocognitive assessments and had at least a 2-year follow-up assessment were included in the construction of an RC for psychosis. In the second phase of the SHARP sample of 100 subjects, 93 with data integrity were included to validate the performance of the SHARP-RC.
The SHARP-RC showed good discrimination of subsequent transition to psychosis with an AUC of 0.78 (p < 0.001). The individualized risk generated by the SHARP-RC provided a solid estimation of conversion in the independent validation sample, with an AUC of 0.80 (p = 0.003). A risk estimate of 20% or higher had excellent sensitivity (84%) and moderate specificity (63%) for the prediction of psychosis. The relative contribution of individual risk components can be simultaneously generated. The mobile app-based SHARP-RC was developed as a convenient tool for individualized psychosis risk appraisal.
The SHARP-RC provides a practical tool not only for assessing the probability that an individual at CHR will develop full psychosis, but also personal risk components that might be targeted in early intervention.
Science with the Murchison Widefield Array: Phase I results and Phase II opportunities
A. P. Beardsley, M. Johnston-Hollitt, C. M. Trott, J. C. Pober, J. Morgan, D. Oberoi, D. L. Kaplan, C. R. Lynch, G. E. Anderson, P. I. McCauley, S. Croft, C. W. James, O. I. Wong, C. D. Tremblay, R. P. Norris, I. H. Cairns, C. J. Lonsdale, P. J. Hancock, B. M. Gaensler, N. D. R. Bhat, W. Li, N. Hurley-Walker, J. R. Callingham, N. Seymour, S. Yoshiura, R. C. Joseph, K. Takahashi, M. Sokolowski, J. C. A. Miller-Jones, J. V. Chauhan, I. Bojičić, M. D. Filipović, D. Leahy, H. Su, W. W. Tian, S. J. McSweeney, B. W. Meyers, S. Kitaeff, T. Vernstrom, G. Gürkan, G. Heald, M. Xue, C. J. Riseley, S. W. Duchesne, J. D. Bowman, D. C. Jacobs, B. Crosse, D. Emrich, T. M. O. Franzen, L. Horsley, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, M. Walker, R. B. Wayth, A. Williams, C. Wu
Journal: Publications of the Astronomical Society of Australia / Volume 36 / 2019
Published online by Cambridge University Press: 13 December 2019, e050
The Murchison Widefield Array (MWA) is an open-access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. Altogether, more than 60 observing programs have recorded 20 000 h of data, producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories.
Biodiversity, systematics, and new taxa of cladid crinoids from the Ordovician Brechin Lagerstätte
David F. Wright, Selina R. Cole, William I. Ausich
Journal: Journal of Paleontology , First View
Published online by Cambridge University Press: 29 November 2019, pp. 1-24
Upper Ordovician (Katian) strata of the Lake Simcoe region of Ontario record a spectacularly diverse and abundant echinoderm fauna known as the Brechin Lagerstätte. Despite recognition as the most taxonomically diverse Katian crinoid paleocommunity, the Brechin Lagerstätte has received relatively little taxonomic study since Frank Springer published his classic monograph on the "Kirkfield fauna" in 1911.
Using a new collection of exceptionally preserved material, we evaluate all dicyclic inadunate crinoids occurring in the Brechin Lagerstätte, which is predominantly comprised of cladids (Eucladida and Flexibilia). We document 15 species across 11 genera, including descriptions of two new genera and four new species. New taxa include Konieckicrinus brechinensis n. gen. n. sp., K. josephi n. gen. n. sp., Simcoecrinus mahalaki n. gen. n. sp., and Dendrocrinus simcoensis n. sp.
Although cladids are not commonly considered major components of the Early Paleozoic Crinoid Macroevolutionary Fauna, which is traditionally conceived as dominated by disparids and diplobathrid camerates, they are the most diverse major lineage of crinoids occurring in the Brechin Lagerstätte. This unexpected result highlights the important roles of specimen-based taxonomy and systematic revisions in the study of large-scale diversity patterns.
UUID: http://zoobank.org/09dda7c2-f2c5-4411-93be-3587ab1652ab
26 - Compositional and Mineralogic Analyses of Mars Using Multispectral Imaging on the Mars Exploration Rover, Phoenix, and Mars Science Laboratory Missions
from Part IV - Applications to Planetary Surfaces
By James F. Bell, William H. Farrand, Jeffrey R. Johnson, Kjartan M. Kinch, Mark Lemmon, Mario C. Parente, Melissa S. Rice, Danika Wellington
Edited by Janice L. Bishop, James F. Bell III, Arizona State University, Jeffrey E. Moersch, University of Tennessee, Knoxville
Book: Remote Compositional Analysis
Published online: 15 November 2019
Print publication: 28 November 2019, pp 513-537
Multispectral imaging – the acquisition of spatially contiguous imaging data in a modest number (~3–16) of spectral bandpasses – has proven to be a powerful technique for augmenting panchromatic imaging observations on Mars focused on geologic and/or atmospheric context. Specifically, multispectral imaging using modern digital CCD photodetectors and narrowband filters in the 400–1100 nm wavelength region on the Mars Pathfinder, Mars Exploration Rover, Phoenix, and Mars Science Laboratory missions has provided new information on the composition and mineralogy of fine-grained regolith components (dust, soils, sand, spherules, coatings), rocky surface regions (cobbles, pebbles, boulders, outcrops, and fracture-filling veins), meteorites, and airborne dust and other aerosols. Here we review recent scientific results from Mars surface-based multispectral imaging investigations, including the ways that these observations have been used in concert with other kinds of measurements to enhance the overall scientific return from Mars surface missions.
Developing a text messaging-based smoking cessation intervention for young smokers experiencing homelessness
Joan S. Tucker, Sebastian Linnemayr, Eric R. Pedersen, William G. Shadel, Rushil Zutshi, Alexandra Mendoza-Graf
Journal: Journal of Smoking Cessation , First View
Published online by Cambridge University Press: 28 November 2019, pp. 1-9
Cigarette smoking is highly prevalent among young people experiencing homelessness, and many of these smokers are motivated to quit. However, there is a lack of readily available cessation services for this population, which is highly mobile and can be challenging to engage in services.
We describe the development of a smoking cessation text messaging intervention (TMI) for homeless youth who are interested in quitting smoking.
Participants were 18–25 years old and recruited from drop-in centers serving homeless youth. Three focus groups (N = 18) were conducted with smokers to refine the TMI content, and a separate sample of smokers (N = 8) provided feedback on the TMI after using it for 1 week. Survey data assessed the TMI's acceptability and feasibility.
Participants generally rated the TMI as helpful and relevant, and nearly all had cell phone plans that included unlimited texting and were able to view TMI content with few difficulties. Qualitative feedback on strengths/limitations of the TMI in terms of content, tone, and delivery parameters was used to finalize the TMI for a future evaluation.
Results suggest that a TMI is a feasible and acceptable option for young people experiencing homelessness who are interested in quitting smoking.
Infectious disease outbreaks in the African region: overview of events reported to the World Health Organization in 2018 – ERRATUM
F. Mboussou, P. Ndumbi, R. Ngom, Z. Kassamali, O. Ogundiran, J. Van Beek, G. Williams, C. Okot, E. L. Hamblion, B. Impouma
Journal: Epidemiology & Infection / Volume 147 / 2019
Published online by Cambridge University Press: 27 November 2019, e307
Improved bounds on horizontal convection
Cesar B. Rocha, Thomas Bossy, Stefan G. Llewellyn Smith, William R. Young
Journal: Journal of Fluid Mechanics / Volume 883 / 25 January 2020
Published online by Cambridge University Press: 27 November 2019, A41
Print publication: 25 January 2020
For the problem of horizontal convection the Nusselt number based on entropy production is bounded from above by $C\,Ra^{1/3}$ as the horizontal convective Rayleigh number $Ra\rightarrow \infty$ for some constant $C$ (Siggers et al., J. Fluid Mech., vol. 517, 2004, pp. 55–70). We re-examine the variational arguments leading to this 'ultimate regime' by using the Wentzel–Kramers–Brillouin method to solve the variational problem in the $Ra\rightarrow \infty$ limit and exhibiting solutions that achieve the ultimate $Ra^{1/3}$ scaling. As expected, the optimizing flows have a boundary layer of thickness ${\sim}Ra^{-1/3}$ pressed against the non-uniformly heated surface; but the variational solutions also have rapid oscillatory variation with wavelength ${\sim}Ra^{-1/3}$ along the wall. As a result of the exact solution of the variational problem, the constant $C$ is smaller than the previous estimate by a factor of $2.5$ for no-slip and $1.6$ for no-stress boundary conditions. This modest reduction in $C$ indicates that the inequalities used by Siggers et al. (J. Fluid Mech., vol. 517, 2004, pp. 55–70) are surprisingly accurate.
Reduced limbic microstructural integrity in functional neurological disorder
Ibai Diez, Benjamin Williams, Marek R. Kubicki, Nikos Makris, David L. Perez
Functional neurological disorder (FND) is a condition at the intersection of neurology and psychiatry. Individuals with FND exhibit corticolimbic abnormalities, yet little is known about the role of white matter tracts in the pathophysiology of FND. This study characterized between-group differences in microstructural integrity, and correlated fiber bundle integrity with symptom severity, physical disability, and illness duration.
A diffusion tensor imaging (DTI) study was performed in 32 patients with mixed FND compared to 36 healthy controls. Diffusion-weighted magnetic resonance images were collected along with patient-reported symptom severity, physical disability (Short Form Health Survey-36), and illness duration data. Weighted-degree and link-level graph theory and probabilistic tractography analyses characterized fractional anisotropy (FA) values across cortico-subcortical connections. Results were corrected for multiple comparisons.
Compared to controls, FND patients showed reduced FA in the stria terminalis/fornix, medial forebrain bundle, extreme capsule, uncinate fasciculus, cingulum bundle, corpus callosum, and striatal-postcentral gyrus projections. Except for the stria terminalis/fornix, these differences remained significant adjusting for depression and anxiety. In within-group analyses, physical disability inversely correlated with stria terminalis/fornix and medial forebrain bundle FA values; illness duration negatively correlated with stria terminalis/fornix white matter integrity. A FND symptom severity composite score did not correlate with FA in patients.
In this first DTI study of mixed FND, microstructural differences were observed in limbic and associative tracts implicated in salience, defensive behaviors, and emotion regulation. These findings advance our understanding of neurocircuit pathways in the pathophysiology of FND.
A VOEvent-based automatic trigger system for the Murchison Widefield Array
P. J. Hancock, G. E. Anderson, A. Williams, M. Sokolowski, S. E. Tremblay, A. Rowlinson, B. Crosse, B. W. Meyers, C. R. Lynch, A. Zic, A. P. Beardsley, D. Emrich, T. M. O. Franzen, L. Horsley, M. Johnston-Hollitt, D. L. Kaplan, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, C. M. Trott, M. Walker, R. B. Wayth, C. Wu
The Murchison Widefield Array (MWA) is an electronically steered low-frequency (<300 MHz) radio interferometer with a 'slew' time of less than 8 s. Low-frequency (∼100 MHz) radio telescopes are ideally suited for rapid-response follow-up of transients due to their large field of view, the inverted spectrum of coherent emission, and the fact that the dispersion delay between a 1 GHz and 100 MHz pulse is on the order of 1–10 min for dispersion measures of 100–2000 pc/cm³. The MWA has previously been used to provide fast follow-up for transient events including gamma-ray bursts (GRBs), fast radio bursts (FRBs), and gravitational waves, using systems that respond to Gamma-ray Coordinates Network (GCN) packet-based notifications. We describe a system for automatically triggering MWA observations of such events, based on Virtual Observatory Event (VOEvent) standard triggers, which is more flexible, capable, and accurate than previous systems. The system can respond to external multi-messenger triggers, which makes it well suited to searching for prompt coherent radio emission from GRBs, the study of FRBs and gravitational waves, single-pulse studies of pulsars, and rapid follow-up of high-energy superflares from flare stars. The new triggering system has the capability to trigger observations in both the regular correlator mode (limited to ≥0.5 s integrations) and the Voltage Capture System (VCS, 0.1 ms integration) of the MWA, and represents a new mode of operation for the MWA. The upgraded standard correlator triggering capability has been in use since MWA observing semester 2018B (July–Dec 2018), and the VCS and buffered-mode triggers will become available for observing in a future semester.
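The dispersion-delay figure quoted above follows from the standard cold-plasma delay formula, Δt ≈ 4.1488 ms × DM × (ν_lo⁻² − ν_hi⁻²) with frequencies in GHz and DM in pc/cm³. A quick check of the quoted range (an illustrative calculation using the commonly quoted dispersion constant):

```python
def dispersion_delay(dm, f_lo_ghz, f_hi_ghz):
    """Arrival delay (s) of the low-frequency pulse relative to the high-frequency
    one, for dispersion measure dm in pc/cm^3 and frequencies in GHz."""
    k = 4.148808e-3  # dispersion constant in s GHz^2 per (pc/cm^3)
    return k * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Delay between 100 MHz and 1 GHz, in minutes, for the DM range quoted above
d100 = dispersion_delay(100, 0.1, 1.0) / 60    # DM = 100 pc/cm^3
d2000 = dispersion_delay(2000, 0.1, 1.0) / 60  # DM = 2000 pc/cm^3
```

For DM = 100 pc/cm³ the delay is about 0.7 min, and for DM = 2000 pc/cm³ about 14 min, consistent with the order-of-magnitude window that makes triggered low-frequency follow-up feasible.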
Infectious disease outbreaks in the African region: overview of events reported to the World Health Organization in 2018
The WHO African region is characterised by the largest infectious disease burden in the world. We conducted a retrospective descriptive analysis using records of all infectious disease outbreaks formally reported to the WHO in 2018 by Member States of the African region. We analysed the spatio-temporal distribution, the notification delay, as well as the morbidity and mortality associated with these outbreaks. In 2018, 96 new disease outbreaks were reported across 36 of the 47 Member States. The most commonly reported disease outbreak was cholera, which accounted for 20.8% (n = 20) of all events, followed by measles (n = 11, 11.5%) and yellow fever (n = 7, 7.3%). About a quarter of the outbreaks (n = 23) were reported following signals detected through media monitoring conducted at the WHO regional office for Africa. The median delay between the disease onset and WHO notification was 16 days (range: 0–184). A total of 107 167 people were directly affected, including 1221 deaths (mean case fatality ratio (CFR): 1.14% (95% confidence interval (CI) 1.07%–1.20%)). The highest CFR was observed for diseases targeted for eradication or elimination: 3.45% (95% CI 0.89%–10.45%). The African region remains prone to outbreaks of infectious diseases. It is therefore critical that Member States improve their capacities to rapidly detect, report and respond to public health events.
14 - Earth as Organic Chemist
By Everett Shock, Christiana Bockisch, Charlene Estrada, Kristopher Fecteau, Ian R. Gould, Hilairy Hartnett, Kristin Johnson, Kirtland Robinson, Jessie Shipp, Lynda Williams
Edited by Beth N. Orcutt, Isabelle Daniel, Université Claude-Bernard Lyon I, Rajdeep Dasgupta, Rice University, Houston
Book: Deep Carbon
Published online: 03 October 2019
Print publication: 17 October 2019, pp 415-446
The Earth is a powerful organic chemist, transforming vast quantities of carbon through complex processes, leading to diverse suites of products that include the fossil fuels upon which modern societies depend. When exploring how the Earth operates as an organic chemist, it is tempting to turn to how organic reactions are traditionally studied in chemistry labs. While highly informative, especially in terms of insights gained into reaction mechanisms, this approach can also be a source of frustration, as many of the reactants and conditions employed in chemistry labs have few or no parallels to geologic processes. The primary goal of this chapter is to provide examples of predicting thermodynamic influences and using the predictions to design experiments that reveal the mechanisms of how reactions occur at the elevated temperatures and pressures encountered in the Earth. This work is ongoing, and we hope this chapter will inspire numerous and diverse experimental and theoretical advances in hydrothermal organic geochemistry.
Treatment life and economic comparisons of honey mesquite (Prosopis glandulosa) and huisache (Vachellia farnesiana) herbicide programs in rangeland
Case R. Medlin, W. Allan McGinty, C. Wayne Hanselka, Robert K. Lyons, Megan K. Clayton, William J. Thompson
Journal: Weed Technology / Volume 33 / Issue 6 / December 2019
Published online by Cambridge University Press: 09 October 2019, pp. 763-772
Herbicides have been a primary means of managing undesirable brush on grazing lands across the southwestern United States for decades. Continued encroachment of honey mesquite and huisache on grazing lands warrants evaluation of treatment life and economics of current and experimental treatments. Treatment life is defined as the time between treatment application and when canopy cover of undesirable brush returns to a competitive level with native forage grasses (i.e., 25% canopy cover for mesquite and 30% canopy cover for huisache). Treatment life of industry-standard herbicides was compared with that of aminocyclopyrachlor plus triclopyr amine (ACP+T) from 10 broadcast-applied honey mesquite and five broadcast-applied huisache trials established from 2007 through 2013 across Texas. On average, the treatment life of industry standard treatments (IST) for huisache was 3 yr. In comparison, huisache canopy cover was only 2.5% in plots treated with ACP+T 3 yr after treatment. The average treatment life of IST for honey mesquite was 8.6 yr, whereas plots treated with ACP+T had just 2% mesquite canopy cover at that time. Improved treatment life of ACP+T compared with IST life was due to higher mortality resulting in more consistent brush canopy reduction. The net present values (NPVs) of ACP+T and IST for both huisache and mesquite were similar until the treatment life of the IST application was reached (3 yr for huisache and 8.6 yr for honey mesquite). At that point, NPVs of the programs diverged as a result of brush competition with desirable forage grasses and additional input costs associated with theoretical follow-up IST necessary to maintain optimum livestock forage production. The ACP+T treatments did not warrant a sequential application over the 12-yr analysis for huisache or 20-yr analysis for honey mesquite that this research covered. These results indicate ACP+T provides cost-effective, long-term control of honey mesquite and huisache.
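The NPV divergence described above is discounted-cash-flow arithmetic: once the shorter treatment life forces a follow-up application, its cost stream keeps accruing. The sketch below uses made-up per-acre costs, horizon, and discount rate purely to illustrate the mechanics — none of these numbers come from the study.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows; cash_flows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical comparison: a $30/acre treatment reapplied every 3 yr
# (its treatment life) versus a $45/acre treatment that lasts the whole
# 12-yr horizon, both at an assumed 5% discount rate.
retreat = [-30 if t % 3 == 0 else 0 for t in range(12)]
one_shot = [-45] + [0] * 11
```

Under these illustrative numbers the longer-lived treatment ends up with the higher (less negative) NPV despite its higher up-front cost — the same qualitative pattern the authors report for ACP+T versus the industry standards.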
Subfossil lemur discoveries from the Beanka Protected Area in western Madagascar
David A. Burney, Haingoson Andriamialison, Radosoa A. Andrianaivoarivelo, Steven Bourne, Brooke E. Crowley, Erik J. de Boer, Laurie R. Godfrey, Steven M. Goodman, Christine Griffiths, Owen Griffiths, Julian P. Hume, Walter G. Joyce, William L. Jungers, Stephanie Marciniak, Gregory J. Middleton, Kathleen M. Muldoon, Eliette Noromalala, Ventura R. Pérez, George H. Perry, Roger Randalana, Henry T. Wright
Journal: Quaternary Research / Volume 93 / Issue 1 / January 2020
Print publication: January 2020
A new fossil site in a previously unexplored part of western Madagascar (the Beanka Protected Area) has yielded remains of many recently extinct vertebrates, including giant lemurs (Babakotia radofilai, Palaeopropithecus kelyus, Pachylemur sp., and Archaeolemur edwardsi), carnivores (Cryptoprocta spelea), the aardvark-like Plesiorycteropus sp., and giant ground cuckoos (Coua). Many of these represent considerable range extensions. Extant species that were extirpated from the region (e.g., Prolemur simus) are also present. Calibrated radiocarbon ages for 10 bones from extinct primates span the last three millennia. The largely undisturbed taphonomy of bone deposits supports the interpretation that many specimens fell in from a rock ledge above the entrance. Some primates and other mammals may have been prey items of avian predators, but human predation is also evident. Strontium isotope ratios (87Sr/86Sr) suggest that fossils were local to the area. Pottery sherds and bones of extinct and extant vertebrates with cut and chop marks indicate human activity in previous centuries. Scarcity of charcoal and human artifacts suggests only occasional visitation to the site by humans. The fossil assemblage from this site is unusual in that, while it contains many sloth lemurs, it lacks ratites, hippopotami, and crocodiles typical of nearly all other Holocene subfossil sites on Madagascar.
Julie Codell and Linda K. Hughes, eds. Replication in the Long Nineteenth Century: Re-makings and Reproductions. Edinburgh: Edinburgh University Press, 2018. Pp. 320. $125.00 (cloth).
William R. McKelvy
Journal: Journal of British Studies / Volume 58 / Issue 4 / October 2019
Dissipation of radiation energy in concentrated solid-solution alloys: Unique defect properties and microstructural evolution
Yanwen Zhang, Takeshi Egami, William J. Weber
Journal: MRS Bulletin / Volume 44 / Issue 10 / October 2019
The effort to develop metallic alloys with increased structural strength and improved radiation performance has focused on the incorporation of either solute elements or microstructural inhomogeneities to mitigate damage. The recent discovery and development of single-phase concentrated solid-solution alloys (SP-CSAs) has prompted fundamental questions that challenge established theories and models currently applicable to conventional alloys. The current understanding of electronic and atomic effects, defect evolution, and microstructure progression suggests that radiation energy dissipates in SP-CSAs at different interaction strengths via energy carriers (electrons, phonons, and magnons). Modification of electronic- and atomic-level heterogeneities and tailoring of atomic transport processes can be realized through tuning of the chemical complexity of SP-CSAs by the selection of appropriate elements and their concentrations. Fundamental understanding of controlling energy dissipation via site-to-site chemical complexity reveals new design principles for predictive discovery and guided synthesis of new alloys with targeted functionalities, including radiation tolerance.
Boom and bust in Bronze Age Britain: major copper production from the Great Orme mine and European trade, c. 1600–1400 BC
R. Alan Williams, Cécile Le Carlier de Veslud
Journal: Antiquity / Volume 93 / Issue 371 / October 2019
Published online by Cambridge University Press: 15 October 2019, pp. 1178-1196
The Great Orme Bronze Age copper mine in Wales is one of Europe's largest, although its size has been attributed to a small-scale, seasonal labour force working for nearly a millennium. Here, the authors report the results of interdisciplinary research that provides evidence that Great Orme was the focus of Britain's first mining boom, c. 1600–1400 BC, probably involving a full-time mining community and the wide distribution of metalwork from Brittany to Sweden. This new interpretation suggests greater integration than previously suspected of Great Orme metal into the European Bronze Age trade/exchange networks, as well as more complex local and regional socio-economic interactions.
An Archaeology of Abundance: Reevaluating the Marginality of California's Islands. KRISTINA M. GILL, MIKAEL FAUVELLE, and JON M. ERLANDSON, editors. 2019. University Press of Florida, Gainesville. xvii + 307 pp. $100.00 (hardcover), ISBN 978-0-8130-5616-6.
William R. Hildebrandt
Journal: American Antiquity , First View
Published online by Cambridge University Press: 30 September 2019, pp. 1-2
Homework Problems
Memorize Power Series
Memorize \(d\vec{r}\)
Write \(d\vec{r}\) in rectangular, cylindrical, and spherical coordinates.
Rectangular: \begin{equation} d\vec{r}= \end{equation}
Cylindrical: \begin{equation} d\vec{r}= \end{equation}
Spherical: \begin{equation} d\vec{r}= \end{equation}
Thermal radiation and Planck distribution
Tags: Planck distribution, blackbody radiation, photons, statistical mechanics
These notes from the fourth week of Thermal and Statistical Physics cover blackbody radiation and the Planck distribution. They include a number of small group activities.
Potential vs. Potential Energy
In this course, two of the primary examples we will be using are the potential due to gravity and the potential due to an electric charge. Both of these forces vary like \(\frac{1}{r}\), so they will have many, many similarities. Most of the calculations we do for the one case will be true for the other. But there are some extremely important differences:
Find the value of the electrostatic potential energy of a system consisting of a hydrogen nucleus and an electron separated by the Bohr radius. Find the value of the gravitational potential energy of the same two particles at the same radius. Use the same system of units in both cases. Compare and contrast the two answers.
Find the value of the electrostatic potential due to the nucleus of a hydrogen atom at the Bohr radius. Find the gravitational potential due to the nucleus at the same radius. Use the same system of units in both cases. Compare and contrast the two answers.
Briefly discuss at least one other fundamental difference between electromagnetic and gravitational systems. Hint: Why are we bound to the earth gravitationally, but not electromagnetically?
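A quick numerical pass at the first two parts — using rounded CODATA-style constants in SI units — shows the scale of the contrast. This is a check of the arithmetic, not a full solution to the problem.

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9            # Coulomb constant, N m^2 C^-2
e = 1.602e-19            # elementary charge, C
m_e = 9.109e-31          # electron mass, kg
m_p = 1.673e-27          # proton mass, kg
a0 = 5.292e-11           # Bohr radius, m

U_elec = -k_e * e**2 / a0      # ~ -4.36e-18 J, i.e. about -27.2 eV
U_grav = -G * m_e * m_p / a0   # ~ -1.92e-57 J
ratio = U_elec / U_grav        # ~ 2.3e39
```

The electrostatic binding dwarfs the gravitational one by about 39 orders of magnitude; bulk matter is bound to the Earth gravitationally rather than electrically because it is, on the whole, electrically neutral.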
Series Notation 1
Write out the first four nonzero terms in the series:
\[\sum\limits_{n=0}^\infty \frac{1}{n!}\]
\[\sum\limits_{n=1}^\infty \frac{(-1)^n}{n!}\]
\begin{equation} \sum\limits_{n=0}^\infty {(-2)^{n}\,\theta^{2n}} \end{equation}
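An exact-arithmetic check of the first two prompts — a verification aid, not a substitute for writing the terms out by hand:

```python
from fractions import Fraction
from math import factorial

# sum_{n=0}^inf 1/n!  ->  first four nonzero terms
first = [Fraction(1, factorial(n)) for n in range(4)]

# sum_{n=1}^inf (-1)^n/n!  ->  first four nonzero terms
second = [Fraction((-1) ** n, factorial(n)) for n in range(1, 5)]

# The third series is geometric in theta^2:
# sum_n (-2)^n theta^(2n) = 1 - 2*theta^2 + 4*theta^4 - 8*theta^6 + ...
```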
Contours
Shown below is a contour plot of a scalar field, \(\mu(x,y)\). Assume that \(x\) and \(y\) are measured in meters and that \(\mu\) is measured in kilograms. Four points are indicated on the plot.
Determine \(\frac{\partial\mu}{\partial x}\) and \(\frac{\partial\mu}{\partial y}\) at each of the four points.
On a printout of the figure, draw a qualitatively accurate vector at each point corresponding to the gradient of \(\mu(x,y)\) using your answers to part a above. How did you choose a scale for your vectors? Describe how the direction of the gradient vector is related to the contours on the plot and what property of the contour map is related to the magnitude of the gradient vector.
Evaluate the gradient of \(h(x,y)=(x+1)^2\left(\frac{x}{2}-\frac{y}{3}\right)^3\) at the point \((x,y)=(3,-2)\).
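For the last part, a central-difference evaluation is a handy way to verify a hand-computed gradient — a numerical sanity check, not the analytic answer the problem asks for:

```python
def h(x, y):
    return (x + 1) ** 2 * (x / 2 - y / 3) ** 3

def num_grad(f, x, y, eps=1e-6):
    # central differences in each variable
    fx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    fy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return fx, fy

fx, fy = num_grad(h, 3, -2)   # ~ (194.04, -75.11)
```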
Boltzmann probabilities
Consider a three-state system with energies \((-\epsilon,0,\epsilon)\).
At infinite temperature, what are the probabilities of the three states being occupied? What is the internal energy \(U\)? What is the entropy \(S\)?
At very low temperature, what are the three probabilities?
What are the three probabilities at zero temperature? What is the internal energy \(U\)? What is the entropy \(S\)?
What happens to the probabilities if you allow the temperature to be negative?
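The limiting cases in this problem can be checked numerically with a small Boltzmann-weight helper — a check on the limits, not the full solution:

```python
import math

def boltzmann_probs(energies, kT):
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)             # partition function
    return [w / Z for w in weights]

E = (-1.0, 0.0, 1.0)             # energies in units of epsilon
hot = boltzmann_probs(E, 1e6)    # all three probabilities approach 1/3
cold = boltzmann_probs(E, 1e-2)  # lowest-energy state approaches probability 1
# A negative temperature, e.g. boltzmann_probs(E, -1e-2), piles nearly
# all the weight onto the highest-energy state instead.
```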
Nucleus in a Magnetic Field
Power from the Ocean
It has been proposed to use the thermal gradient of the ocean to drive a heat engine. Suppose that at a certain location the water temperature is \(22^\circ\)C at the ocean surface and \(4^\circ\)C at the ocean floor.
What is the maximum possible efficiency of an engine operating between these two temperatures?
If the engine is to produce 1 GW of electrical power, what minimum volume of water must be processed every second? Note that the specific heat capacity of water \(c_p = 4.2\) Jg\(^{-1}\)K\(^{-1}\) and the density of water is 1 g cm\(^{-3}\), and both are roughly constant over this temperature range.
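One common back-of-the-envelope treatment — assuming the Carnot bound and that the processed surface water can in principle be cooled all the way down to the deep-water temperature — runs as follows. This is a sketch under those stated assumptions, not the only defensible reading of "minimum volume."

```python
T_hot, T_cold = 295.15, 277.15        # 22 C and 4 C, in kelvin
eta_max = 1 - T_cold / T_hot          # Carnot bound, ~6.1%

P_elec = 1e9                          # required electrical output, W
Q_hot = P_elec / eta_max              # heat intake per second, ~1.64e10 J
c_p = 4.2                             # J g^-1 K^-1
dT = T_hot - T_cold                   # 18 K of available cooling
mass_g_per_s = Q_hot / (c_p * dT)     # grams of surface water per second
volume_m3_per_s = mass_g_per_s * 1e-6 # at 1 g/cm^3: ~220 m^3 every second
```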
Power Series Coefficients 2
Use the formula for a Taylor series: \[f(z)=\sum_{n=0}^{\infty} \frac{1}{n!} \frac{d^n f(a)}{dz^n} (z-a)^n\] to find the first three non-zero terms of a series expansion for \(f(z)=e^{-kz}\) around \(z=3\).
Use the formula for a Taylor series: \[f(z)=\sum_{n=0}^{\infty} \frac{1}{n!} \frac{d^n f(a)}{dz^n} (z-a)^n\] to find the first three non-zero terms of a series expansion for \(f(z)=\cos(kz)\) around \(z=2\).
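For reference, applying the quoted Taylor formula to these two prompts gives the following (worked here as a check, with \(k\) left symbolic):

\[ e^{-kz} = e^{-3k}\left[1 - k(z-3) + \frac{k^2}{2}(z-3)^2\right] + \cdots \]

\[ \cos(kz) = \cos(2k) - k\sin(2k)\,(z-2) - \frac{k^2}{2}\cos(2k)\,(z-2)^2 + \cdots \]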
Look up and memorize the power series to fourth order for \(e^z\), \(\sin z\), \(\cos z\), \((1+z)^p\) and \(\ln(1+z)\). For what values of \(z\) do these series converge?
Practically Efficient
From boiling lead and black art: An essay on the history of mathematical typography
Math fonts from six different type systems, courtesy Chalkdust
I've always felt like constructing printed math was much more of an art form than regular typesetting. Someone typesetting mathematics is less a "typist" and more an artist attempting to render abstract data on a two-dimensional surface. Mathematical symbols are themselves a language, but they are fundamentally a visual representation of human-conceived knowledge—knowledge that would be too inefficient to convey through verbal explanations. This brings the typesetting of mathematics closer to a form of data visualization than regular printed text.
No matter how hard it's ever been to create printed text, creating printed math has always been even harder. In pre-digital times, equation-laden texts were known as "penalty copy" because of the significant additional time and expense it took to set math notation for printing presses.
Even when modern word processors like Microsoft Word include equation editors, they tend to be difficult to use and often produce unpleasing results. While LaTeX and similar variants produce the highest quality digital math type, these frameworks also have much more of a learning barrier than general word processing.
But these modern quibbles are much more the fault of hedonic adaptation than any of the tools available to us today. We have it vastly easier than any previous stage of civilization, and I think it's critically important for those of us who write math to have at least a basic awareness of the history of mathematical typesetting.
For me, knowing this history has had several practical benefits. It's made me more grateful for the writing tools I have today—tools that I can use to simplify and improve the presentation of quantitative concepts to other actuaries. It's also motivated me to continue to strive for elegance in the presentation of math—something I feel like my profession has largely neglected in the Microsoft Office era of the last twenty years.
Most importantly, it's reminded me just how much of an art the presentation of all language has always been. Because pre-Internet printing required so many steps, so many different people, so much physical craftsmanship, and so much waiting, there were more artistic layers between the author's original thoughts and the final arrangement of letters and figures on pages. More thinking occurred throughout the entire process.
To fully appreciate mathematical typography, we have to first appreciate the general history of typography, which is also a history of human civilization. No other art form has impacted our lives more than type.
The first two Internets
While the full history of printing dates back many more centuries, few would disagree that Johannes Gutenberg's 15th-century printing press was the big bang moment for literacy. It was just as much of an Internet-like moment as the invention of the telegraph or the Internet itself.
Before Gutenberg, reading was the realm of elites and scholars. After Gutenberg, book production exploded, and reading became exponentially more practical to the masses. Literacy rates soared. Reformations happened.
The Gutenberg Printing Press
I would argue that the invention of the printing press was on par with the evolutionary "invention" of human language itself. In The Origins of Political Order, Francis Fukuyama explains that spoken language catalyzed the separation of humans from lower forms of primates:
The development of language not only permits the short-term coordination of action but also opens up the possibility of abstraction and theory, critical cognitive faculties that are unique to human beings. Words can refer to concrete objects as well as to abstract classes of objects (dogs, trees) and to abstractions that refer to invisible forces (Zeus, gravity).
Language also permits practical survival advantages for families and social groups:
By not stepping on the snake or eating the root that killed your cousin last week, you avoid being subject to the same fate, and you can quickly communicate that rule to your offspring.
Oral communication became not only a survival skill, but a tool of incredible influence. Rhetoric and the art of persuasion were highly valued in Greek and Roman societies.
If spoken language was the first human "Internet," mass printing was the next key milestone in the democratization of human knowledge. Mass production of printed material amplified the human voice by incalculable orders of magnitude beyond oral communication.
Of boiling lead and black art
Like all inventors, Johannes Gutenberg didn't really make anything new so much as he combined existing materials and technologies in new ways. Gutenberg didn't invent printing. He didn't invent the press. He didn't even invent movable type, which typically involves arranging (typesetting) casts of individual letters that can be brushed or dipped in ink and pressed to a page.
Metal movable type arranged by hand
Gutenberg's key innovation was really in the typecasting process. Before Gutenberg's time, creating letters out of metal, wood, and even ceramic was extremely time consuming and difficult to do in large quantities. Gutenberg revolutionized hot metal typesetting by coming up with an alloy mostly made of lead that could be melted and poured into a letter mold called a matrix. He also had to invent an ink that would stick to lead.
His lead alloy and matrix concepts are really the reasons the name Gutenberg became synonymous with printing. In fact, the lead mixture he devised was so effective, it continued to be used well into the 20th century, and most typecasting devices created after his time continued using a similar matrix case to mold type.
From a workflow perspective, Gutenberg's innovation was to separate typecasting from typesetting. With more pieces of type available, simply adding more people to the process allowed for more typesetting. With more typeset pages available, printing presses could generate more pages per hour. And more pages, of course, meant more books.
But let's not kid ourselves. Even post-Gutenberg, typesetting a single book was still an extremely tedious process. Gutenberg's first masterpiece, the Gutenberg Bible (c. 1450s), was—and still is—considered a remarkable piece of art. It required nearly 300 pieces of individual type. Every upper and lower case instance of every letter and every symbol required its own piece of lead. Not only did each character have to be set individually by hand, justification required manual word spacing line by line.
The Gutenberg Bible
Even though Gutenberg's innovations allowed books to be printed faster than ever before, it was an excruciating process by today's one-click standard. But it was within those moments spent arranging characters and lines that the so called "black art" of book printing flourished. Typesetting even a basic text was an intimate, human process.
A better way to cast hot lead
The art of hand-setting type would be passed down from generation to generation for over 400 years until the Industrial Revolution began replacing human hands with machines in all aspects of life. The most famous of the late 19th century technologies to refine typesetting were Monotype and Linotype, both invented in America.
The Monotype System was invented by American-born Tolbert Lanston, and Linotype was invented by German immigrant Ottmar Mergenthaler. Both men improved on the system Gutenberg devised centuries earlier, but each added their own take on the art of shaping hot lead into type.
Because Linotype machines could produce entire fused lines of justified lead type at a time, they became extremely popular for most books, newspapers, and magazines. Just imagine the look on people's faces when they were told they could stack entire lines of metal type rather than having to arrange each letter individually first!
Four lines of Linotype, courtesy Deep Wood Press
The Monotype System produced individual pieces of type. It could not produce the same lines per hour as Linotype, but it maintained the art and flexibility of setting individual pieces of type. Monotype also took a more mathematical approach to typesetting:
In many ways the key innovation in the Monotype System was not the mechanical device, ingenious as it was. To allow the casting of fully justified lines of type, Tolbert Lanston chose not to follow the path of Ottmar Merganthaler, who used tapered spacebands to create word spacing. He instead devised a unit system that assigned each character a value, from five to eighteen, that corresponded to its width. A lower case "i", or a period would be five units, an uppercase "W" would be eighteen. This allowed the development of the calculating mechanism in the keyboard, which is central to the sophistication of Monotype set matter. (Letterpress Commons)
And so it was fitting that Monotype, while slower than Linotype, offered more sophistication and ended up a favorite for mathematical texts and publications containing non-standard characters and symbols.
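The unit-counting logic described above is simple enough to sketch. What follows is a hypothetical rendering of the arithmetic the keyboard's calculating mechanism performed — the function and its interface are illustrative, not a reconstruction of the actual machine:

```python
def word_space_units(word_widths, measure_units):
    """Distribute leftover width across the word spaces of a line.

    word_widths: total unit width of each word on the line (on Lanston's
    scale a lowercase 'i' was 5 units, an uppercase 'W' 18).
    measure_units: the full width of the line to be filled.
    """
    gaps = len(word_widths) - 1
    leftover = measure_units - sum(word_widths)
    base, extra = divmod(leftover, gaps)
    # hand the remainder out one unit at a time so the line fills exactly
    return [base + 1 if i < extra else base for i in range(gaps)]

spaces = word_space_units([40, 25, 33, 18], 140)
# the words (116 units) plus the spaces now fill the 140-unit measure
```

The point of doing this sum at the keyboard, rather than at the caster, is exactly the workflow split the passage describes: the operator finished the line, the machine computed the spacing, and the punched tape carried the result to the caster.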
The Monotype System is an exquisite piece of engineering, and in many ways represents a perfection of Gutenberg's original workflow using Industrial Age technology. It's also a fantastic example of early "programming" since it made use of hole-punched paper tape to instruct the operations of a machine—an innovation that many people associate with the rise of computing in the mid-20th century, but was in use as early as 1725.
Like Gutenberg, Lanston sought to refine the workflow of typesetting by dividing it into specialized sub-steps. The Monotype System consisted of two machines: a giant keyboard and type caster.
The keyboard had distinct keys for different cases of letters, numbers, and common symbols. The keyboard operator's job was essentially to type character-by-character and make decisions about where to end lines. Once a line was ended, the machine would calculate the word spacing required to justify the line and punch holes into the paper tape. The caster was designed to read the hole patterns to determine how to actually cast the lines.
Therefore, a print shop could accelerate the "input" phase of typecasting by simply adding more keyboards (and people) to the process. This was a significant improvement over hand setting because a keyboard operator could generate more tape per hour than a human compositor could arrange type by hand.
The caster machine was also very efficient. As it read the tape line by line, it would inject hot, liquid lead into each type matrix, then output water-cooled type into a galley, where it came out pre-assembled into justified lines.
At this stage, the Monotype System offered a major advantage over Linotype. If a compositor—or anyone proofing the galley—found an error, the type could be fixed by hand with relative ease (especially if only a single character needed correcting).
It's also easy to see why Monotype was superior to Linotype for technical writing, including mathematics. Even though the Monotype keyboard had tons of keys and could be modified for special purposes, it wasn't designed to generate mathematical notation.
As I said earlier, no matter how hard it's ever been to create text, creating math has always been even harder. Daniel Rhatigan:
Despite the efficiency of the standard Monotype system, mechanical composition could only accommodate the most basic mathematical notation. Simple single-line expressions might be set without manual intervention, but most maths call for a mix of roman and italic characters, numerals, Greek symbols, superior and inferior characters, and many other symbols. To ease the process, printers and Monotype itself often urged authors to use alternate forms of notation that could be set more easily, but the clarity of the subject matter often depended on notation that was more difficult to set.
Even if there were room in the matrix case for all the symbols needed at one time, the frequent use of oversize characters, strip rules, and stacked characters and symbols require type set on alternate body sizes and fitted together like a puzzle. This wide variety of type styles and sizes made if [sic] costly to set text with even moderately complex mathematics, since so much time and effort went into composing the material by hand at the make-up stage.
The complex arrangement of characters and spaces required to compose mathematics with metal type, courtesy The Printing of Mathematics (1954)
While the Monotype System would never fully displace the hand composition of math, UK-based Monotype Corporation made great strides toward this end in the 1950s with a new 4-line system for setting equations. The 4-line system essentially divided the standard equation line into four regions: regions one and two were in the upper half of the line, while regions three and four were in the lower half. It also allowed for a thin, two-point-high strip between the second and third regions. This middle strip was exactly the height of a standard equals sign (=) and was a key feature distinguishing Monotype's 4-line system from the competing "Patton method" for 4-line math equations developed in the U.S.
The 4-line system, via Daniel Rhatigan in "The Monotype 4-Line System for Setting Mathematics"
While Monotype's 4-line system would standardize mathematical typography more than ever before, allowing for many math symbols to be set using a modified Monotype keyboard, it would prove to be the "last hoorah" for Monotype's role in mathematical typography—and more generally, the era of hot metal type. Roughly a decade after the 4-line system was put into production, type would go cold forever.
The typewriter compromise
The 20th century, particularly post-World War II, saw an explosion in scientific literature, not just in academia but in the public and private sector as well. Telecommunications booms and space races don't happen without a lot of math sharing.
Monotype was only a solution for publications worth the cost of sending to a printing press. Many technical papers were "printed" using a typewriter. Equations could either be written in by hand or composed on something like an IBM Selectric typewriter, which became very popular in the 1960s. Typewriters were office mainstays well into the late 20th century.
An actuarial paper composed by typewriter with handwritten math (1989)
Larger departments at businesses and universities not only had legions of secretarial workers capable of typing papers, but many had technical typists as well. Anecdotes like this one from a Math Overflow commenter, Peter May, highlight the daily struggles that took place:
At Chicago in the 1960's and 1970's we had a technical typist who got to the point that he, knowing no mathematics, could and did catch mathematical mistakes just from the look of things. He also considered himself an artist, and it was a real battle to get things the way you and not he wanted them.
The Selectric's key feature was a golf ball-sized typeball that could be interchanged. One of the typeballs IBM made contained math symbols, so a typist could simply swap out typeballs as needed to produce a paper containing math notation. However, the printed results were arguably worse aesthetically than handwritten math and not even comparable to Monotype.
An equation composed on an IBM Selectric typewriter, courtesy Nick Higham
Molding at the speed of light
As the second half of the 20th century progressed, technological progress would make it easier and easier to indulge those who preferred speed to aesthetics. In the 1960s, phototypesetting—which was actually invented right after World War II but had to "wait" on several computer-era innovations to fully come of age—rapidly replaced hot lead and metal matrixes with light and film negatives.
Every aspect of phototypesetting was dramatically faster than hot metal type setting. As phototypesetting matured, text could be entered on a screen rather than the traditional keyboarding process required for Monotype and Linotype. This made it much easier to catch errors during the keyboarding process.
A man operating a Lumitype 450, a popular phototypesetting machine in the 1960s
Phototypesetters could generate hundreds of characters per second by rapidly flashing light through the film negative matrix. And instead of arranging lead galleys of type, compositors began arranging what were essentially photographs of text.
Phototypesetting also offered more flexibility. With a Monotype or Linotype machine, font sizes were constrained by the physical size of the matrix. Such physical constraints don't apply to light, which could easily be magnified in a phototypesetter to create larger versions of characters.
Even though Monotype would linger into the 1980s in extremely limited use, it was essentially extinct by the mid-1970s. The allure of phototypesetting's speed and low cost was impossible for print companies to resist.
Phototypesetting was indeed the new king of typography—but it would prove to be a mere figure head appointed by the burgeoning computer age. As we all know now, anything computers can make viable, they can also replace. Clark Coffee:
Without a computer to drive them, phototypesetters are just like the old Linotype machines except that they produce paper instead of lead. But, with a computer, all of the old Typesetters' decisions can be programmed. We can kern characters with abandon, dictionaries and programs can make nearly all hyphenations correctly, lines and columns can be justified, and special effects like dropped capitals become routine.
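Justification of the sort Coffee describes reduces, at its crudest, to a greedy line-breaking pass. The toy sketch below illustrates that idea on character counts only — far cruder than the optimizing algorithms (notably Knuth–Plass in TeX) that were to come:

```python
def greedy_break(words, measure):
    """Greedy first-fit line breaking, measuring width in characters."""
    lines, current = [], []
    for w in words:
        # width if we append this word (one space per word already on the line)
        width = sum(len(x) for x in current) + len(current) + len(w)
        if current and width > measure:
            lines.append(" ".join(current))
            current = []
        current.append(w)
    if current:
        lines.append(" ".join(current))
    return lines

lines = greedy_break("the quick brown fox jumps over the lazy dog".split(), 15)
# -> ['the quick brown', 'fox jumps over', 'the lazy dog']
```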
In the late 1970s, computers had become advanced enough to do such things, but of course computers, themselves, don't want to make art. Computers need instructions from artists. Fortunately for all of us, there was such an artist with the programming chops and passion to upload the art of typesetting into the digital age.
A new matrix filled with ones and zeros
While many probably looked at photo-composed typography with indifference, one man did not. It just so happened that there was a brilliant mathematician and computer scientist who cared a lot about how printed math looked.
Donald Knuth, a professor of computer science at Stanford University, was writing a projected seven-volume survey entitled The Art of Computer Programming. Volume 3 was published in 1973, composed with Monotype. By then, computer science had advanced to the point where a revised edition of volume 2 was in order but Monotype composition was no longer possible. The galleys returned to Knuth by his publisher were photocomposed. Knuth was distressed: the results looked so awful that it discouraged him from wanting to write any more. But an opportunity presented itself in the form of the emerging digital output devices—images of letters could be constructed of zeros and ones. This was something that he, as a computer scientist, understood. Thus began the development of TeX. (Barbara Beeton and Richard Palais)
Donald Knuth (1970s)
By 1978, Knuth was ready to announce TeX ("tek"1) to the world at the annual meeting of the American Mathematical Society (AMS). In his lecture, subsequently published by the American Mathematical Society in March 1979, Knuth proclaimed that:
Mathematics books and journals do not look as beautiful as they used to. It is not that their mathematical content is unsatisfactory, rather that the old and well-developed traditions of typesetting have become too expensive. Fortunately, it now appears that mathematics itself can be used to solve this problem. (AMS)
The gravity of this assertion is difficult to appreciate today. It's not so much a testament to Knuth's brilliance as a mathematician and computer scientist—there were certainly others in the 1970s with comparable math and computer skills.2 What makes Knuth's role in typographical history so special was just how much he cared about the appearance of typography in the 1970s—and the fact that he used his technical abilities to emulate the art he so appreciated from the Monotype era.
This was not a trivial math problem:
The [hot lead era] Typesetter was solely responsible for the appearance of every page. The wonderful vagaries of hyphenation, particularly in the English language, were entirely in the Typesetter's control (for example, the word "present" as a noun hyphenates differently than the same word as a verb). Every special feature: dropped capitals, hyphenation, accented characters, mathematical formulas and equations, rules, tables, indents, footnotes, running heads, ligatures, etc. depended on the skill and esthetic judgment of the Typesetter. (Clark Coffee)
Knuth acknowledges that he was not the first person to engineer letters, numbers, and symbols using mathematical techniques. Others had attempted this as early as the 15th century, but they were constrained by a much simpler mathematical toolbox (mainly lines and circles) that simply could not orchestrate the myriad nuances of fine typography.
By the 1970s, however, there were three key innovations available for Knuth to harness. First, math had become far more sophisticated: cubic splines made it possible to define precise formulas for any character shape. Second, computers made it possible to program Knuth's formulas for consistent repetition. Computers also made it possible to loop through lines of text, making decisions about word spacing for line justification—even retrospectively hyphenating words to achieve optimal word spacing within a paragraph. Third, digital printing had become viable, and despite Knuth's highly discerning tastes, he was apparently satisfied with its output.
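To make the first of those innovations concrete: a cubic Bézier segment is just a polynomial blend of four control points, which is why a handful of numbers can encode a smooth piece of a glyph outline. Here is a minimal sketch in Python—the control points are invented for illustration, not taken from any real font:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier segment at parameter t (0 <= t <= 1).

    p0 and p3 are the segment's endpoints; p1 and p2 are the
    'handles' that pull the curve into shape.
    """
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Hypothetical control points for one arch-like stroke of a glyph outline
segment = [(0, 0), (0, 2), (2, 2), (2, 0)]

# Sample the curve at 11 evenly spaced parameter values
points = [cubic_bezier(*segment, t / 10) for t in range(11)]
```

A real font outline chains many such segments end to end; the curve always starts at `p0` and ends at `p3`, so adjacent segments join seamlessly by sharing endpoints.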
In Knuth's words:
… I was quite skeptical about digital typography, until I saw an actual sample of what was done on a high quality machine and held it under a magnifying glass: It was impossible to tell that the letters were generated with a discrete raster! The reason for this is not that our eyes can't distinguish more than 1000 points per inch; in appropriate circumstances they can. The reason is that particles of ink can't distinguish such fine details—you can't print the edge of an ink line that zigzags 1000 times on the diagonal of a square inch, the ink will round off the edges. In fact the critical number seems to be more like 500 than 1000. Thus the physical properties of ink cause it to appear as if there were no raster at all.
Knuth was certain that it was time to help typography leap over phototypesetting—from matrices of hot lead to pages of pixels.
While developing TeX and Metafont, I'm sure Knuth had several "this has to be the future" moments—probably not unlike Steve Jobs standing over the first Apple I prototype in a California garage only a year or two earlier. Indeed, just like other more celebrated Jobsian innovators of the late 20th century, Knuth's creative energy was driven by the future he saw for his innovation:
Within another ten years or so, I expect that the typical office typewriter will be replaced by a television screen attached to a keyboard and to a small computer. It will be easy to make changes to a manuscript, to replace all occurrences of one phrase by another and so on, and to transmit the manuscript either to the television screen, or to a printing device, or to another computer. Such systems are already in use by most newspapers, and new experimental systems for business offices actually will display the text in a variety of fonts. It won't be long before these machines change the traditional methods of manuscript preparation in universities and technical laboratories.
Today, we take it for granted that computers can instantly render pretty much anything we can dream up in our minds, but this was closer to science fiction in the late 1970s. While Knuth's chief goal for TeX was to use mathematics to automate the setting of characters in the output, he also wanted the input to be as pleasing and logical as possible to the human eye.3
For example, the following TeX syntax:
$y = \sqrt{x} + {x - 1 \over 2}$
will render:
\[y = \sqrt{x} + {x - 1 \over 2}\]
TeX was a remarkable invention, but its original form could only be used in a handful of locations—a few mainframe computers here and there. What really allowed TeX to succeed was its portability—something made possible by TeX82, a second version of TeX created for multiple platforms in 1982 with the help of Frank Liang. With TeX82, Knuth also implemented a device independent file format (DVI) for TeX output. With the right DVI driver, any printer could read the binary instructions in the DVI file and translate it to graphical (print) output.
Knuth would only make one more major update to TeX in 1989: TeX 3.0 was expanded to accept 256 input characters instead of the original 128. This change came at the urging of TeX's rapidly growing European user base who wanted the ability to enter accented characters and ensure proper hyphenation in non-English texts.
Except for minor bug fixes, Knuth was adamant that TeX should not be updated again beyond version 3:
I have put these systems into the public domain so that people everywhere can use the ideas freely if they wish. I have also spent thousands of hours trying to ensure that the systems produce essentially identical results on all computers. I strongly believe that an unchanging system has great value, even though it is axiomatic that any complex system can be improved. Therefore I believe that it is unwise to make further "improvements" to the systems called TeX and METAFONT. Let us regard these systems as fixed points, which should give the same results 100 years from now that they produce today.
This level of restraint was as poetic as Knuth's work to save the centuries-old art of mathematical typography from the rapidly-changing typographical industry. Now that he had solved the mathematics of typography, he saw no reason to disrupt the process solely for the sake of disruption.
Some thirty years after TeX 3.0 was released, its advanced line justification algorithm still runs circles around other desktop publishing tools. There is no better example than Roel Zinkstok's comparison of the first paragraph of Moby Dick set using Microsoft Word, Adobe InDesign, and pdfLaTeX (a LaTeX macro package that outputs TeX directly to PDF).
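To get a feel for why TeX's paragraph-at-a-time justification beats the greedy line-by-line approach of most word processors, here is a toy optimizer in Python. It is emphatically not the real Knuth–Plass algorithm—which also models stretchable and shrinkable spaces, penalties, and hyphenation—just a sketch of the core idea: pick break points that minimize total "badness" over the whole paragraph.

```python
def break_lines(words, width):
    """Break words into lines, minimizing the sum of (leftover space)^2
    per line. The last line is not penalized, echoing TeX's convention."""
    n = len(words)
    INF = float("inf")
    best = [0.0] * (n + 1)   # best[i]: minimal cost of setting words[i:]
    split = [n] * (n + 1)    # split[i]: index just past the last word on the line
    for i in range(n - 1, -1, -1):
        best[i] = INF
        length = -1  # cancels the leading space added in the loop below
        for j in range(i, n):
            length += len(words[j]) + 1
            if length > width and j > i:
                break  # line overfull (but always allow a single long word)
            slack = 0 if j == n - 1 else (width - length) ** 2
            if slack + best[j + 1] < best[i]:
                best[i] = slack + best[j + 1]
                split[i] = j + 1
    # Walk the split table to recover the chosen lines
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:split[i]]))
        i = split[i]
    return lines
```

Because the cost of each break depends on the breaks that follow, a greedy first-fit can paint itself into a corner (one very loose line near the end); the dynamic program considers the paragraph as a whole, which is the insight behind TeX's famously even spacing.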
Following 3.0, Knuth wanted point release updates to follow the progression of π (the current version is 3.14159265). Knuth also declared that on his death, the version number should be permanently set to π. "From that moment on," he ordained, "all 'bugs' will be permanent 'features.'"
Refining content creation
In The TeXbook, Knuth beautifully captures the evolutionary feedback loop between humans and technological tools of expression:
When you first try to use TeX, you'll find that some parts of it are very easy, while other things will take some getting used to. A day or so later, after you have successfully typeset a few pages, you'll be a different person; the concepts that used to bother you will now seem natural, and you'll be able to picture the final result in your mind before it comes out of the machine. But you'll probably run into challenges of a different kind. After another week your perspective will change again, and you'll grow in yet another way; and so on. As years go by, you might become involved with many different kinds of typesetting; and you'll find that your usage of TeX will keep changing as your experience builds. That's the way it is with any powerful tool: There's always more to learn, and there are always better ways to do what you've done before.
Even though TeX itself was frozen at version 3, that didn't stop smart people from finding better ways to use it. TeX 3 was extremely good at typesetting, but its users still had to traverse a non-trivial learning curve to get the most out of its abilities, especially for complex documents and books. In 1985, Leslie Lamport created LaTeX ("lah-tek" or "lay-tek") to further streamline the input phase of the TeX process. LaTeX became extremely popular in academia in the 1990s, and the current version (originally released in 1994) is still the "side" of TeX that most TeX users see today.
LaTeX is essentially a collection of TeX macros that make creating the content of a TeX document more efficient and make the necessary commands more concise. In doing this, LaTeX brings TeX even closer to the ideal of human-readable source content, allowing the writer to focus on the critically important task of content creation before worrying about the appearance of the output.
LaTeX refined the visual appearance of certain math syntax by adding new commands like \frac, which makes it easier to discern the numerator from the denominator in a fraction. So with LaTeX, we would rewrite the previous equation in this form:
$y = \sqrt{x} + \frac{x - 1}{2}$
LaTeX also added many macros that make it easier to compose very large documents and books. For example, LaTeX has built-in \chapter, \section, \subsection, and even \subsubsection commands with predefined (but highly customizable) formatting. Commands like these allow the typical LaTeX user to avoid working directly with the so-called "primitives" in TeX. Essentially, the user instructs LaTeX, and LaTeX instructs TeX.
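As a sketch of how this looks in practice (the titles below are invented for illustration), a minimal LaTeX book source might read:

```latex
\documentclass{book}

\begin{document}

\chapter{Typesetting}   % numbered, styled, and added to the TOC automatically
\section{Hot Metal}
\subsection{The Monotype Machine}

Body text goes here; LaTeX decides how every heading looks.

\end{document}
```

The author never touches fonts, numbering, or spacing—those decisions are delegated down through LaTeX to TeX's primitives.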
LaTeX's greatest power of all, however, is its extensibility through the packages developed by its active "super user" base. There are thousands of LaTeX packages in existence today and most of them come pre-installed with modern TeX distributions like TeX Live. There are multiple LaTeX packages to enable and extend every conceivable aspect of document and book design—from math extensions that accommodate every math syntax under the sun (even actuarial) to special document styles to powerful vector graphics packages like PGF/TikZ. There is even a special document class called Beamer that will generate presentation slides from LaTeX, complete with transitions.
A 3D vector image created with PGF/TikZ
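To give a flavor of the syntax, here is a minimal, hypothetical TikZ figure—nothing like the 3D image above, just a pair of axes and a curve drawn entirely from LaTeX source:

```latex
\documentclass{article}
\usepackage{tikz}

\begin{document}
\begin{tikzpicture}
  \draw[->] (0,0) -- (3,0) node[right] {$x$};               % x-axis
  \draw[->] (0,0) -- (0,2) node[above] {$y$};               % y-axis
  \draw[thick, domain=0:3, smooth] plot (\x, {0.2*\x*\x});  % a parabola
\end{tikzpicture}
\end{document}
```

Everything—arrowheads, label placement, curve smoothing—is computed at typesetting time, so the figure scales and restyles along with the rest of the document.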
Collectively, these packages, along with the stable underlying code base of TeX, make LaTeX an unrivaled document preparation and publishing system. Despite the popularity of WYSIWYG word processors like Microsoft Word since the 1990s, they can't come close to the power of LaTeX or the elegance of its output.
It's worth noting that LaTeX isn't the only macro layer available for TeX. ConTeXt and others have their own unique syntax to achieve the same goals.
Beyond printed paper
As sophisticated as TeX was, it filled the same role that typecasting and typesetting machines had since Gutenberg's time: TeX's job was to tell a printer how to arrange ink on paper. Beginning with TeX82, this was accomplished with a special file format Knuth created called DVI (device independent format). While the TeX file was human-readable, DVI was only machine-readable: essentially a compact set of binary instructions telling the output driver which characters and rules to place where on the page.
Even though computers began radically changing the print industry starting in the 1970s, paper would remain the dominant medium on which people read print through the end of the 20th century. But things began changing irreversibly in the 1990s. Computer screens were getting better and more numerous. The Internet also made it easier than ever to share information among computers. It was only natural that people began not just "computing" on computer screens, but also reading more and more on computer screens.
In 1993, Adobe unveiled a new Portable Document Format (PDF) in an attempt to make cross-platform digital reading easier. PDF was essentially a simplified version of Adobe's popular desktop publishing format, PostScript, but unlike PostScript, PDF was designed to be easier to read on a screen.
PDF would spend most of the 1990s relatively unknown to most people. It was a proprietary format that not only required a several-thousand-dollar investment in Adobe Acrobat software to create, it also required a $50 Adobe Acrobat Reader program to view. Adobe later made Acrobat Reader available for free, but the proprietary nature of PDF and relatively limited Internet connectivity of the early 1990s didn't exactly provide an environment for PDF to flourish.
By the late 1990s, however, PDF had gotten the attention of Hàn Thế Thành, a graduate student who wanted to use TeX to publish his master's thesis and Ph.D. dissertation directly to PDF. Thành applied his interest in micro-typography to create pdfTeX, a version of TeX capable of typesetting TeX files directly to PDF without creating a DVI file at all.
pdfTeX preserved all of the typographical excellence in TeX and also added a number of micro-typographical features that can be accessed through the LaTeX microtype package. Micro-typography deals with the finer aspects of typography, including Gutenberg-inspired ways of optimizing the justification of lines—like using multiple versions of the same glyph and hanging punctuation techniques.
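In LaTeX, turning these refinements on is a one-line package load. The `protrusion` and `expansion` options shown here are real microtype options (character protrusion and font expansion), though recent versions of the package enable sensible defaults on their own when the engine supports them:

```latex
% Requires pdfTeX or LuaTeX; font expansion is unavailable in plain DVI output
\usepackage[protrusion=true, expansion=true]{microtype}
```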
pdfTeX also harnessed the digital features of PDF, like hyperlinking and table of contents structures. As the general popularity of PDF continued to grow into the 2000s, and once Adobe released the PDF standard to the International Organization for Standardization in 2007, pdfTeX became an essential version of TeX. Today it is included by default in any standard TeX package along with pdfLaTeX, which interprets LaTeX files for the pdfTeX program.
It's worth recognizing that Donald Knuth did not create TeX to speed up the publishing process. He wanted to emulate the appearance of Monotype using mathematics. But with the evolution of LaTeX, pdfTeX, and the Internet, TeX ended up enabling what probably seemed unimaginable to anyone waiting weeks for their galley proofs to come in the mail before the 1970s. Today, thanks to TeX and modern connectivity, we can publish extremely sophisticated documents for a nearly unlimited audience in a matter of seconds.
The next innovation in typography: slowing down
I think a lot of people have this idea that pure mathematics is the polar opposite of art. A left brain versus right brain thing, if you will. I actually think that math's role in the human experience requires artistry as much as logical thinking: logic to arrive at the mathematical truths of our universe and artistry to communicate those truths back across the universe.
As George Johnson writes in Fire in the Mind:
… numbers, equations, and physical laws are neither ethereal objects in a platonic phantom zone nor cultural inventions like chess, but simply patterns of information—compressions—generated by an observer coming into contact with the world… The laws of physics are compressions made by information gatherers. They are stored in the forms of markings—in books, on magnetic tapes, in the brain. They are part of the physical world.
Our ability to mark the universe has greatly expanded since prehistoric people first disturbed the physical world with their thoughts on cave walls. For most of recorded history, writing meant having to translate thoughts through lead, ink, and paper. Untold numbers of highly skilled people were involved in the artistry of pre-digital typesetting. Even though their skills were made obsolete by technological evolution, we can be thankful that people like Donald Knuth fossilized typographical artistry in the timelessness of mathematics.
And so here we are now—in a time when written language needs only subatomic ingredients like electricity and light to be conveyed to other human beings. Our ability to "publish" our thoughts is nearly instantaneous, and our audience has become global, if not universal as we spill quantum debris out into the cosmos.
Today, faster publishing is no longer an interesting problem. It's an equation that's been solved—it can't be reduced further.
As with so many other aspects of modern life, technology has landed us in an evolutionarily inverted habitat. To be physiologically healthy, for example, we have to override our instincts to eat more and rest. When it comes to publishing, we now face the challenge of imposing more constraint on the publishing process for the sake of leaner output and the longevity of our thoughts.
For me, this is where understanding the history of printing and typography has become a kind of cognitive asset. These realizations have made me resist automation a bit more and actually welcome friction in the creative processes necessary even for technical writing. It's also helped me justify spending more time, not less, in the artistic construction of mathematical formulas and the presentation of quantitative information in general.
Technological innovation, in the conventional sense, won't help us slow the publishing process back down. Slowing down requires better thought technology. It requires a willingness to draft for the sake of drafting. It requires throwing away most of what we think because most of our thoughts don't deserve to be read by others. Most of our thoughts are distractions—emotional sleights of the mind that trick us into thinking we care about something that we really don't—or that we understand something that we really don't.
Rather than trying to compress our workflows further, we need to factor the art of written expression back into thinking, writing, and publishing, with the latter being the hardest to achieve and worthy of only the purest thoughts and conclusions.
TeX is pronounced "tek" and is an English representation of the Greek letters τεχ, which is an abbreviation of τέχνη (or technē). Techne is a Greek concept that can mean either "art" or "craft," but usually in the context of a practical application. ↩
One noteworthy TeX predecessor was eqn, a syntax that was designed to format equations for printing in troff, which was a system developed at AT&T for the Unix operating system in the early 1970s. The eqn syntax for mathematics notation has similarities with TeX, leading some to speculate that eqn influenced Knuth in his development of TeX. We do know that Knuth was aware of troff enough to have an opinion of it—and not a good one. See p. 349 of TUGBoat, Vol. 17 (1996), No. 4 for more. Thanks to Duncan Agnew for bringing troff to my attention and also pointing out that it was later replaced by groff, which writes PostScript and is included in modern Unix-based systems (even macOS) and can be found via the man pages. Remarkably, it can still take troff-based syntax developed in the 1970s and typeset it without any alterations. ↩
Knuth's philosophy that computer code should be as human-readable and as self-documenting as possible also led him to develop literate programming, a pivotal contribution to computer programming that has impacted every mainstream programming language in use today. ↩
But I already paid for it?!?!
Subscription-based app pricing is a thorny issue that's far from resolved, but one of the very worst arguments I hear whenever a company like Ulysses switches to a subscription model goes something like this:
"Why do I have to pay (again) for software I've already purchased?"
This is a flat-out lie that people usually create for themselves to help support their negative reaction to a perceived price increase. The lie basically says, "if I want to keep using this app, I need to pay for it again." In many cases, including the case with Ulysses, this is completely false. Ulysses clearly addresses this on their site:
The previous, single-purchase versions of Ulysses have both been removed from sale. They remain fully functional, of course, and we have even updated both versions for High Sierra and iOS 11 respectively. So, if you decide to keep using the "old" Ulysses, you should not encounter any problem. New features, however, will only be added to the subscription version in the future.
So there. The software you paid for is still "yours" in the sense that it is fully functional (as you paid for it) and will continue working indefinitely. You "own" it, and it's not going away.
Will it work forever? Hell no. Software isn't the same as a cast iron skillet. Software isn't going to work the same 100 years from now. It's probably not even going to work 100 weeks from now without being nursed through the vagaries of operating system updates, security patches, and user-expected support. When the developer of a cast iron skillet is done, they're done. When the developer of a piece of software is done, they're out of business—because if a developer quits, so does their product.
The more you can look at your software as a knowledge product—a product that rapidly decays without the service of its developers—the more subscription pricing makes sense objectively.
But that's the crux. Software needs human buyers, and our brains are poorly evolved to evaluate the many abstractions of our modern economy.
Dave, this conversation can serve no purpose anymore. Goodbye.
Via Hobo Signs:
An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English.
The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI "agents."
Our ability to think about abstract things makes us very different from other animals. It's why we have big heads, big philosophies, big religions, and, many times, big problems with absolutely no basis in the physical world.
We're in the middle of a really fascinating experiment in civilization that started around the time of the Industrial Revolution, but really got going in the second half of the 20th century when computers (machines) enabled our abstract thinking to affect the physical world by significantly higher orders of magnitude.
We've already seen that mixing humans and advanced technology can have undesirable effects. The financial crisis of 2008 happened in large part because really smart people on Wall Street created financial structures that became too abstract for even their creators to fully understand—especially when set loose in the market to mix with human emotion and other financial structures.
The "good news" with failures of financial abstraction is that they can, apparently, be corrected by offsetting measures of abstraction like the creation of additional (abstract) money. Complicated financial structures also collapse when they are no longer believed in—like bad dreams.
AI is different in that it could very well evolve into something that surpasses DNA-based organisms. AI, once fully viable, may not collapse so easily, if at all.
A nod to checklists
Gabe whipped up a great list of checklist tools. My favorite aspect of his post is that there's no clear winner. There shouldn't be.
Checklists can come in all forms, and the ideal format depends entirely on the application. For me, checklists make sense when I need to see not only what needs to be done, but also what I've already done. Apps that automatically "vanish" completed tasks fail to do the latter.
For me, sometimes there's just no substitute for a spreadsheet for large checklists, especially if each item can have multiple statuses or dimensions. Sometimes adding more columns is way more efficient than adding more tasks (rows).
For packing lists, I've tried so many apps, but OmniOutliner is the best for me. Its simple checkbox feature is perfect, and I have several templates I use for different types of trips.
Sometimes an Apple Note will suffice, and sometimes I just "x" lines in Drafts for a quick and dirty grocery list.
When I'm working with large numbers of LaTeX files on my Mac, I use file colors, prefix schemes, and even move files from one folder to another to keep track of what I've processed and what I haven't.
Checklists are as old as civilization and are one of the most fundamental ways to augment the human mind, which needs help seeing where it's been and where it needs to go. Everyone can benefit from checklists. Just check out The Checklist Manifesto.
A couple of Jekyll updates
Since moving to Jekyll last year, I've done relatively little to tweak the inner workings of this site. After all, one of the most appealing things about having a static site is that it doesn't need to have a lot of moving parts. It just works.
But today I finally got around to a couple of housekeeping items that have been on my list: image captions and MathJax.
For image captions, I settled on a beautifully simple solution posted by Andrew Wei on Stack Overflow:

![](path_to_image)
*image_caption*
This takes advantage of the fact that you can create CSS for combinations of HTML elements. In this case, I can use
img + em { display: block; text-align: center;}
to target the *image_caption* text only and center it under images, which are also centered on this site by default. It works perfectly, and this isn't even Jekyll-specific. Anyone publishing in Markdown could do this.
Jekyll + MathJax
Adding MathJax took a little more time, but not much. It was worth it just to remind me of the brilliance of Jekyll's architecture. Even though the Jekyll site mentions MathJax, it doesn't say enough to be of immediate use. It basically points to a blog post that describes switching from the default kramdown Markdown converter to redcarpet. Given that I'm happy with kramdown and not in the mood to backtest a bunch of blog posts with a different converter, I wanted to stick with kramdown.
A series of subsequent web searches led me to a GitHub issue thread for a Jekyll theme that I'm not even using, but I found a really efficient implementation of MathJax there by user "mmistakes," who suggested adding a mathjax variable in each page's YAML front matter that could be set to true on a post-by-post basis.
The elegance of this solution is that the MathJax script will only be written into the HTML of posts that actually have MathJax in them. This seemed super appealing to me because it meant that I didn't have to worry about MathJax being triggered by some accidental combination of characters in an old blog post.
I ended up adding
{% if page.mathjax %}
<script type="text/javascript" async
  src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
{% endif %}
to my head.html file, which contains Ruby instructions for building the contents of each page's <head> element. For any page where the YAML front matter has mathjax: true, the MathJax script will be included. I decided to always include it in the site's index.html file, which shows recent posts. And going forward, I can simply include it in the YAML front matter of any individual post. For example, this post's front matter is:
---
layout: post
title: A couple of Jekyll updates
mathjax: true
---
I just finished up a project where I worked with MathJax a lot, and I continue to be impressed at how many LaTeX commands it handles. MathJax even has a special enclose library that handles special actuarial notation that eludes so many people. For example,
$$\require{enclose} {}_{17|}\ddot{a}_{x:\enclose{actuarial}{n}}^{(4)}$$
turns into:
\[\require{enclose} {}_{17|}\ddot{a}_{x:\enclose{actuarial}{n}}^{(4)}\]
Look but don't type
Even though Gabe and I sometimes have slightly differing views on the iPad's productivity value compared to the Mac, with his latest post, I think we are completely in sync—metaphorical Chris Farley falls and all.
In particular, he nails a massive friction point for me with the iPad:
I can type more comfortably on my iPhone than I can with my iPad Pro on the couch, in bed, or even just reclined in the backyard. I'm sure there's a good case out there that will solve this problem, but I'd rather see Apple solve it.
I've tried using various keyboards for my 13" iPad Pro, but I've never found one that let me comfortably type while sitting away from a flat table or desk. This has been a huge failure point from a practical perspective for me because the iPad, by design, begs to be used away from conventional "work stations." So the irony is that the only time I can do serious word creation on my iPad is while sitting at a desk or table.
If I want to escape those confines, which is a frequent want, I use my 13" MacBook Pro, which has the same form factor as the big-big iPad, but allows for lap typing.
Like Gabe, I also find myself in the funny position of using my iPhone to type more, even when my iPad is at hand. A great example: if I'm reading a book on my iPad, it's actually easier to write notes about the book using my iPhone. I even wrote this entire post in Drafts on my iPhone at the breakfast table. My iPad is in sight across the kitchen.
I still use the iPad a lot, but its use cases for typing still remain very limited for me. The Mac and iPhone just have superior keyboard forms.
It doesn't want anything
Tim Cook's entire commencement address to the MIT class of 2017 is an instant classic, but this is the part I want to echo forever:
Technology is capable of doing great things. But it doesn't want to do great things. It doesn't want anything. That part takes all of us. It takes our values and our commitment to our families and our neighbors and our communities. Our love of beauty and belief that all of our faiths are interconnected. Our decency. Our kindness.
I'm not worried about artificial intelligence giving computers the ability to think like humans. I'm more concerned about people thinking like computers without values or compassion, without concern for consequences. That is what we need you to help us guard against. Because if science is a search in the darkness, then the humanities are a candle that shows us where we've been and the danger that lies ahead.
Thinking out loud about the Apple Watch
PCs lead us indoors. Smartphones lead us into isolation. The Apple Watch is—sort-of—leading us back out into the real world again by encouraging movement, keeping phones in pockets, and most importantly, looking up again.
I've owned an Apple Watch since the Series 0 started shipping 26 months ago. I can't imagine ever not owning one again. Actually I can, but only when some higher form of "wearable" supersedes the form of a wrist watch.
If I were to say, "I'm more active because of the Apple Watch," a non-Watch person might say "You shouldn't need a watch to make you more active. After all, people were active for millennia without smart watches."
Well played, armchair anthropologist, but that perspective overlooks the motionlessness of modernity. In the blink of an eye, humans have simply stopped moving. Being indoors with technology is too appealing, and our bodies… oh yeah, we still have bodies! We have all kinds of shit going on besides thumbs and eyes. Well, we should probably move the other parts around a little. Who knows—maybe even a lot.
In other words, we're poorly adapted for the environment we suddenly created for ourselves at the turn of the 21st century. But we are human. And we are nothing if not interested in solving the problems we create for ourselves. I think health-aware technology is a natural adaptation to the health-hostile effects of generations one and two of personal computing.
Oh yeah… I just bought a new Apple Watch. More honestly, I just bought a new health-tracking wrist computer that's more waterproof so I don't have to take it off when I go swimming with my kids—the only time I've had to take off my original Apple Watch in the 26 months I've owned it.
Bigger picture, I've decided that if I'm going to have the benefits of technology that makes me sit still, I also need technology to counteract that. This is life, and these are not horrible problems to have to solve.
One-off shell script execution in BBEdit
I've been meaning to talk more about my "Sublime Text and BBEdit" workflow, and this is a powerful (if mundane) example.
I don't write a ton of scripts on my Mac and generally don't spend much time in Terminal. But when I need a script, I really need a script. My work is very file-heavy. I juggle large numbers of .tex (LaTeX) files across several large folder structures. Every so often I need to copy a subset of files within one folder to another folder so that I can do things to them.
In most cases, I have a list of the files in a plain text .tex file, and I just need to do some file operations on them. Selecting them individually in Finder is tedious and error-prone, so the solution is usually a simple bash script with a bunch of cp commands.
The main friction with bash scripting for folks like me who don't live in Terminal is that you have to create a .sh script and then web search for how to make it executable (because I can never remember the chmod command).
Creating the shell commands is straightforward, especially in Sublime Text. After adding the necessary #!/bin/bash line with TextExpander and defining the "to" and "from" file paths as variables, the magic of Sublime Text's multiple cursors makes it easy to put cp commands and the folder path variables around each file to be copied.
Sublime Text multiple cursors in action
Since Sublime Text won't execute these commands in an unsaved file that hasn't been given proper execute permissions, I simply copy this text into an untitled, unsaved BBEdit window and hit ⌘R. BBEdit has the extremely useful ability to immediately recognize the syntax and execute it right there on the spot.
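For the curious, here's a sketch of the kind of script this workflow produces. The filenames are hypothetical, and the temp-directory setup is only there so the example runs end to end; in real use, FROM and TO would point at actual project folders.

```shell
#!/bin/bash
# Stand-ins for real project folders (hypothetical):
FROM="$(mktemp -d)"
TO="$(mktemp -d)"
touch "$FROM/ch01.tex" "$FROM/ch03.tex" "$FROM/ch07.tex"

# The part Sublime Text's multiple cursors generate: one cp per listed file.
cp "$FROM/ch01.tex" "$TO/"
cp "$FROM/ch03.tex" "$TO/"
cp "$FROM/ch07.tex" "$TO/"

ls "$TO"
```

Paste something like that into an untitled BBEdit window, hit ⌘R, and the files get copied. No saving, no chmod.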
I realize this post is a weird way to promote BBEdit, which is a powerful text editor that, in this case, I'm not even using as an editor. Hopefully I'll find some time to talk more about other ways I'm using it with Sublime Text.
Lessons from the president
Like many other parents, I've struggled to make sense of the current president in the context of parenthood. How do you even talk about it?
But I've realized that much can be taught by studying the commander-in-clown's example. After all, he's the perfect anti-role model for young kids, teens, and adults of all ages: someone we should all aspire not to be when we grow up.
Here are the top five lessons we can learn from our president.
5. Things you put on social media are permanent.
If you tweet strong, emotionally charged opinions reflexively, they will almost certainly come back to haunt you.
What's better: Think before you act. Ask for advice. Do like Lincoln, and write an angry letter that you never send.
4. People who constantly attack others verbally are highly insecure.
The more they attack, the more they broadcast their insecurities and invite hate from others. Bullies seem a lot less intimidating the more you realize that they are more terrified of the world than you are of them.
What's better than being a bully: Promote things you truly believe in. Use positive reinforcement to advance just causes. If you need to be critical, support your position with facts, and don't contradict yourself or allow yourself to be distracted by things that trigger your insecurities.
3. It's OK to be wrong, but you have to admit it.
Everyone is wrong about something. Vulnerability is mightier than the strongest ego. It will win you the most loyal followers.
2. Very wealthy people who attempt to increase their wealth at all costs are not heroes of capitalism.
They are among the greatest cowards on earth. They live in constant fear of losing what they have and will never experience even basic happiness.
What's better: Use your excess to help others. Be in constant thanks for what you have rather than dwell on what you don't have. You will be happier and healthier.
1. Credibility and trust are vital ingredients of leadership.
A ruler who rules only by law will never be as effective as a leader who rules by trust. If no one trusts you, then you cannot trust anyone either. That's a perfect model for a miserable life—whether you are a clown, the leader of the free world, or both.
The boring truth about email security
David Sparks and John Gruber have said all that needs to be said about the revelation that Unroll.me was selling its users' email data.
It was easy for me to delete my Unroll.me account because I had really stopped looking at it already. Last year, I decided to just get out of the way of my email and just let Gmail's stock filters for "social," "promotions," and "updates" channel 80% of my email into those non-action buckets.
On the surface, it may seem odd that I would favor one ad company over another: dump Unroll.me but stay with Google's Gmail. A lot of people have ostensibly moved away from Gmail for the same reason people were throwing up in their mouths over Unroll.me this week.
But I have been using Gmail for a long time, and I have no plans to leave now. I understand that Google sees my email and pours it into its Alphabet soup, and I'm OK with that—not because I think Google is especially benevolent, but because I accept the truth about email data.
I think a lot of people who leave Gmail because of privacy concerns are following the false hope that another company can magically "secure" their email. The truth is that your email will never be totally private. With the exception of email you send to yourself, email takes at least two servers to tango.
Every copy of every email sent to/from you resides on some other email server. If you regularly email a specific person, there are probably thousands of your emails on their hard drive—perhaps the one in the old computer they just sold without wiping the hard drive.
In other words, email is not the same as your note archive or your document repository. Email is necessarily out there. Everywhere.
So in my mind, the solution to email privacy is email avoidance:
Take advantage of iMessage's encryption for chats with friends and family
Move your project or work communication to an app like Basecamp
That's what I've done. Today, I see my email as a bloated version of Twitter: a constant inflow of chaff with the occasional strand of wheat, which mostly takes the form of customer email.
I have no control over how many computers email me every day. But I can definitely control how much email I create myself.
Be still my rolling Pencil
If I'm using my iPad Pro, I'm almost always using my Apple Pencil, too. For me, the Pencil was a massive extension for the iPad and basically made it the go-to environment for reading, studying, and annotating PDFs.
The Apple Pencil is great at many things. Staying still on a flat table is not one of them.
I've tried several accessories and tricks for keeping the Pencil from racing away, but nothing works as well as the FRTMA Apple Pencil Magnetic Sleeve.
It makes the Pencil non-round, so it stays where you set it
It is extremely sleek, preserving the svelteness of the Pencil's design, yet the sleeve adds a bit of tackiness that I actually prefer when writing
It's magnetic, so it sticks to any iPad case
The magnet is very strong. When attached to an iPad case, you can shake the case really hard, and it will not come off. It will, however, come off sometimes when it's in my backpack, but in my experience, "losing" my Apple Pencil inside my backpack is the very best place to lose it—far better than seeing it race across a flat table and down the stairs of my favorite coffee shop.
From clips to stories
Rene Ritchie apparently wrote a treatise on Apple's new Clips app, but don't let that intimidate you. Clips is ridiculously easy to use, and most of its features are discoverable by just playing with it.
The real brilliance of Clips is that you don't even feel like you're doing movie editing, but that's exactly what you're doing. Being able to shoot video is just one step of making a visual story. A movie obviously can't exist without that step. But in my opinion, editing is way more important. Cutting, blending, and curating is what really makes something a story.
I think the "cutting" step is what most iPhone-created movies need the most. I went a long way in solving this problem (accidentally) when I started using Snapchat about a year ago. Before Snapchat, I shot plenty of video with the iPhone, but I almost never did anything with it. The main problem was that my videos tended to be too long. This made them:
Usually boring
Longer than most people wanted to watch
Too much of a hassle to upload due to their file size
So on my phone they sat—unwatched.
The more I used Snapchat for video, the more I realized the brilliance of its ten-second limitation. This constraint made it impossible to shoot long, boring videos and also forced me to throw away outtakes immediately. Before long, I wasn't just using Snapchat to send video snaps, I was saving the videos to my phone.
Now that Clips is here, I'm using the iPhone's camera app for video more often, but I'm still shooting very short duration clips a la Snapchat. Clips makes it ridiculously easy to fuse some or all of any video into a series of clips. Being able to mix videos and pictures into a single clip creates the same effect of a Snapchat story, but it keeps everything on my phone so that I can share it in other ways—notably with people who don't use Snapchat.
It's really the story you should be after.
If you pay attention to almost any TV show, movie, or professionally-made internet video, the very longest shots last no more than five to eight seconds. In action movies, shot length can average as little as two seconds! Some action movies have over 3,000 shots in them. Changing scenes and angles just makes the visual aspect more engaging.
I used Clips to make a couple of short "movies," each consisting of 5–10 short videos and photos I took last week on a family vacation. In a lot of cases, I only grabbed a few of the best seconds of each clip. Creating each "movie" took just minutes using only my iPhone. I'm 100% sure none of those individual videos would have gotten shared if I hadn't used Clips to make them into a story.
Thoughts on iOS automation
It's funny to hear so many people complain about the lack of automation in iOS. In reality, iOS automation has already happened. We were just looking the other way, and when we turned around, we couldn't remember what was there before.
I can't think of a better measure of the success of automation than how quickly an automated process becomes forgotten. Automation's role in the human experience, after all, is to make us forget. Automation frees us to work on new problems beyond the old problem horizon. Automation paves over cavernous ravines, replacing them with short, straight paths to the adjacent possible.
There are countless examples of how iOS has done this. Take photography.
Before the PC, the steps to share pictures usually spanned weeks:
Remember to bring a camera with me
Take pictures on film
Physically deliver the film to a developer days later
Wait more days for the film to be developed
Physically pick up the developed photos
Physically mail the photos to someone, who would receive them days later
When digital photography and the PC arrived, the process shortened, and the output expanded:
Take a picture on a memory card
Remove the memory card from the camera and insert it into a PC
Upload to websites, instantly sharing with hundreds of people or more
Once the iPhone camera fully came of age, the steps became:
Pull out my phone and shoot
Tap to share
Weeks reduced to seconds. The need to bring a physical object with me: gone. The monetary cost of photography: eliminated. And in many cases, the quality of the final product: dramatically better.
The hassles of pre-iPhone photography: forgotten.
The adjacent possibilities unlocked by the confluence of the iPhone's camera and mobile connectivity:
Shareable HD video from anyone's pocket
FaceTime and other wireless video calling
Snapchat, or more generally, the concept of photo messaging
There are so many examples of other things iOS has automated that we never even thought needed automating. Just look at your home screen. The iPhone is essentially a universal remote for modern life.
Traditional computer automation (scripting, better inter-app communication, etc.) is a pretty narrow frontier of iOS automation yet to be fully solved. I'm not convinced that it even needs to be solved as long as we have traditional computers with open file systems. But I believe it will either be solved, or the need for solving it will be obviated by other advances in iOS.
For now, I will continue to enjoy using iOS and macOS, which are much greater together than they are apart. It is impossible to predict the future, but I'm pretty sure we can rule out a "single device for all uses" scenario.
Computers will continue to automate things we never associated with computers. We will continue looking for new problems. And we will continue forgetting about the tedium of times gone by.
And, not or
During Sal Soghoian's appearance on Mac Power Users, he talks about his philosophy on "and, not or":
A lot of people mistakenly embrace the concept of or when it's not necessary. There really needs to be and. And and doesn't necessarily cost more… it just offers more.
Every minute of this show is worth listening to because Sal exudes genius every time he speaks, but his "and, not or" philosophy is a seriously great piece of wisdom, and I hope that now that Sal is outside of Apple he has more opportunity to speak and write about it.
In my experience—observing both myself and others—the "or" mindset usually leads to paralysis or unneeded time spent rebuilding an entire workflow to fit perfectly in a new framework.
The most agile, modular solution is usually some of this and that. "Or" breeds an "all or nothing" approach that usually just ends in nothing. "And" moves things forward:
What's the best computer for productivity? A MacBook and an iPad Pro.
Where should I store notes? DEVONthink and Apple Notes.
Where should I write? Drafts and Ulysses and Sublime Text.
What's the best way to outline? Plain text and iThoughts and OmniOutliner.
What camera should I use? A DSLR and my iPhone.
What's the best way to sketch a visual design concept? Real paper and an Apple Pencil / iPad.
Where do tasks belong? OmniFocus and Reminders and TaskPaper.
What PDF app should I use for mark up? PDF Expert and LiquidText and Notability.
In each case, the "and" mindset lets my mind get out of the way of itself. "And" imposes less friction between ideas and actions.
Practically Efficient is written by Eddie Smith
Software-type Wave–Particle Interaction Analyzer on board the Arase satellite
Yuto Katoh1, Hirotsugu Kojima2, Mitsuru Hikishima3, Takeshi Takashima3, Kazushi Asamura3, Yoshizumi Miyoshi4, Yoshiya Kasahara5, Satoshi Kasahara6, Takefumi Mitani3, Nana Higashio7, Ayako Matsuoka3, Mitsunori Ozaki5, Satoshi Yagitani5, Shoichiro Yokota8, Shoya Matsuda4, Masahiro Kitahara1 and Iku Shinohara3
Earth, Planets and Space (2018) 70:4
Accepted: 25 December 2017
Published: 8 January 2018
We describe the principles of the Wave–Particle Interaction Analyzer (WPIA) and the implementation of the Software-type WPIA (S-WPIA) on the Arase satellite. The WPIA is a new type of instrument for the direct and quantitative measurement of wave–particle interactions. The S-WPIA is installed on the Arase satellite as a software function running on the mission data processor. The S-WPIA on board the Arase satellite uses an electromagnetic field waveform that is measured by the waveform capture receiver of the plasma wave experiment (PWE), and the velocity vectors of electrons detected by the medium-energy particle experiment–electron analyzer (MEP-e), the high-energy electron experiment (HEP), and the extremely high-energy electron experiment (XEP). The prime objective of the S-WPIA is to measure the energy exchange between whistler-mode chorus emissions and energetic electrons in the inner magnetosphere. It is essential for the S-WPIA to synchronize the instruments to a relative time accuracy better than the period of the plasma wave oscillations. Since the typical frequency of chorus emissions in the inner magnetosphere is a few kHz, a relative time accuracy of better than 10 μs is required in order to measure the relative phase angle between the wave and velocity vectors. A dedicated system has therefore been developed on the Arase satellite to realize the required timing accuracy for inter-instrument communication: the time index distributed to all instruments through the satellite system is combined with an S-WPIA clock signal, distributed from the PWE to the MEP-e, HEP, and XEP through a direct line, to synchronize the instruments to a relative time accuracy of a few μs. We also estimate the number of particles required to obtain statistically significant results with the S-WPIA, and the expected accumulation time, by referring to the specifications of the MEP-e and assuming a count rate for each detector.
Radiation belts
Whistler-mode chorus
Wave–particle interactions
The Arase (ERG) satellite was launched from the Uchinoura Space Center on December 20, 2016 to explore the dynamics of the terrestrial radiation belts. One of the prime objectives for this satellite mission is the investigation of the energization process of relativistic electrons by whistler-mode chorus emissions. Whistler-mode chorus emissions are coherent electromagnetic plasma waves observed mainly on the dawn side of the inner magnetosphere (e.g., Summers et al. 1998). Previous studies showed that chorus emissions play crucial roles in the reformation of the outer radiation belt during the recovery phase of geomagnetic storms (e.g., Miyoshi et al. 2003). Recent theoretical and simulation studies have revealed that chorus emissions emerge from a band of whistler-mode waves in regions close to the magnetic equator through nonlinear wave–particle interactions (e.g., Katoh and Omura 2007, 2011, 2013, 2016; Omura et al. 2008, 2009). Chorus emissions propagate away from the equator, and their propagation characteristics vary depending on the plasma environment in the inner magnetosphere (e.g., Katoh 2014). In the generation process of chorus emissions, an electromagnetic electron hole is formed in a specific range of the velocity phase space due to the nonlinear Lorentz force acting on resonant electrons. Simulation studies have revealed that most resonant electrons lose their kinetic energy, contributing to the generation of chorus emissions, and that a fraction of the resonant electrons is trapped inside the hole and is effectively energized through a special form of nonlinear wave trapping called relativistic turning acceleration (Omura et al. 2007) and ultra-relativistic acceleration (Summers and Omura 2007).
Wave–particle interactions in the magnetosphere occur on timescales characteristic of the plasma waves and particles involved. During the interaction between coherent whistler-mode waves and energetic electrons, relaxation of the velocity distribution function of resonant electrons occurs within hundreds or thousands of electron gyro-periods (e.g., Katoh and Omura 2004), corresponding to tens of ms for typical parameters of the Earth's inner magnetosphere. Since the time resolution of conventional plasma instruments on board a spacecraft is usually no finer than a few tens of ms, it is difficult to measure the relaxation of the velocity distribution or the energy exchange between waves and particles.
To overcome the difficulty in the direct measurement of wave–particle interactions, previous studies have used the observed wave phase as a reference to count the number of particles in order to obtain the distribution as a function of the relative phase angle between waves and particles (Ergun et al. 1991, 1998; Gough et al. 1995; Buckley et al. 2000). In sounding rocket experiments, these attempts successfully identified wave–particle correlations between Langmuir waves and electrons with statistical significance (Kletzing et al. 2017). Fukuhara et al. (2009) proposed a new type of instrument for the direct and quantitative measurement of the energy exchange between waves and particles, which is referred to as the Wave–Particle Interaction Analyzer (WPIA). The WPIA uses the three components of observed waveforms and particle velocity vectors to quantify the energy flow by measuring the inner product of the observed instantaneous wave and velocity vectors, corresponding to Joule heating of particles by plasma waves (Katoh et al. 2013). The feasibility of the WPIA for the Arase satellite has been studied using pseudo-observations based on simulations with self-consistent plasma particle codes, which reproduce the process of chorus generation (Katoh et al. 2013; Hikishima et al. 2014). Kitahara and Katoh (2016) suggested that the WPIA is also capable of measuring the pitch angle scattering of particles by plasma waves directly and quantitatively. Recently, Shoji et al. (2017) showed that the WPIA can directly measure the formation of an ion hole through interactions of electromagnetic ion cyclotron waves and energetic ions in the inner magnetosphere.
In this paper, the implementation of the Software-type Wave–Particle Interaction Analyzer (S-WPIA) on the Arase satellite is described. Since the Arase satellite is the first application of the WPIA in space, we installed the Software-type WPIA because of its flexibility in choosing processing algorithms and optimization. The principles and significance of the WPIA are discussed in the "Principles of the WPIA and its significance" section. Details of the S-WPIA implementation on the Arase satellite are described in the "S-WPIA implemented on the Arase satellite" section, and a summary is presented in the "Summary" section.
The WPIA proposed by Fukuhara et al. (2009) uses the three components of observed waveforms and particle velocity vectors. The WPIA quantifies the energy flow by measuring the inner product of the observed instantaneous wave electric field and velocity vectors, E and v, which is the time variation of the kinetic energy of a charged particle and is given by
$$W = \frac{\mathrm{d}K}{\mathrm{d}t} = m_{0}\,\boldsymbol{v} \cdot \frac{\mathrm{d}(\gamma \boldsymbol{v})}{\mathrm{d}t} = q\boldsymbol{E} \cdot \boldsymbol{v},$$
where \(K = m_{0}c^{2}(\gamma - 1)\) is the kinetic energy of a charged particle including relativistic effects, \(m_{0}\) and \(q\) are the rest mass and charge of the particle, respectively, \(c\) is the speed of light, and \(\gamma\) is the Lorentz factor. According to Katoh et al. (2013), the net variation of the kinetic energy of charged particles, \(\Delta W(\boldsymbol{r}, t)\), during a time interval \(\Delta t\) is given by
$$\Delta W(\boldsymbol{r},t) = \int_{t}^{t + \Delta t} \iiint q\boldsymbol{E}(\boldsymbol{r},t') \cdot \boldsymbol{v}\, f(\boldsymbol{r},\boldsymbol{v},t')\,\mathrm{d}\boldsymbol{v}\,\mathrm{d}t',$$
where \(f\) is the phase space density of charged particles. Since the measurement of \(f\) is performed at discrete times, \(\Delta W(\boldsymbol{r}, t)\) is discretized as a summation of \(W(t_{i}) = q\boldsymbol{E}(t_{i}) \cdot \boldsymbol{v}_{i}\) measured over a time interval \(\Delta t\), as follows:
$$\Delta W(\boldsymbol{r},t) \simeq \sum_{i = 1}^{N} q\boldsymbol{E}(t_{i}) \cdot \boldsymbol{v}_{i} = \sum_{i = 1}^{N} W(t_{i}),$$
where \(t \le t_{i} \le t + \Delta t\), N represents the number of particles detected during the time interval Δt, \(t_{i}\) is the detection time for the i-th particle, \(\boldsymbol{E}(t_{i})\) is the wave electric field vector at \(t_{i}\), and \(\boldsymbol{v}_{i}\) is the velocity vector for the i-th particle. Since \(W(t_{i})\) represents the gain or the loss of the kinetic energy of the i-th particle, the net amount of the energy exchange in the region of interest is obtained by summing W for all detected particles, where \(W_{\text{int}} = \sum\nolimits_{i = 1}^{N} W(t_{i})\). Figure 1 shows a schematic diagram of W and Wint as measured by the S-WPIA for interactions between energetic electrons and whistler-mode waves propagating purely parallel to the background magnetic field (after Katoh et al. 2013), where Ew and Bw are the wave electric and magnetic field vectors, respectively, and v⊥ is the perpendicular component of the velocity vector of a particle. The sign of W is determined by the relative phase angle (θ) between Ew and v⊥ (Fig. 1a, b), and the net energy exchange between particles and waves can be evaluated by summing W for all N particles to obtain Wint (Fig. 1c). Representing the numbers of energetic electrons having positive and negative W by N+ and N−, respectively, we expect N+ and N− to differ significantly from each other in regions of efficient wave–particle interaction. Figure 1c indicates the case of an efficient wave–particle interaction resulting in wave generation, where N− is larger than N+, rendering Wint negative. Alternatively, if the difference (δN) between N+ and N− is negligible, Wint approaches zero and no net energy exchange occurs. Since a finite number of particles is used in the computation of δN and Wint, both quantities fluctuate over time. This fluctuation originates from the thermal fluctuation of the distribution of energetic electrons as well as from fluctuations of both the wave electric field amplitude and the relative phase angle θ.
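As a rough numerical illustration of this accumulation (this is not flight software; the wave field and velocity values below are synthetic stand-ins, not Arase data):

```python
import numpy as np

rng = np.random.default_rng(0)
q = -1.602e-19  # electron charge [C]
N = 100_000     # particles detected during the interval

# Synthetic wave electric field sampled at each detection time t_i [V/m]
E = rng.normal(scale=1e-3, size=(N, 3))
# Synthetic electron velocity vectors v_i [m/s]
v = rng.normal(scale=1e7, size=(N, 3))

# W(t_i) = q E(t_i) . v_i for each detected particle
W = q * np.einsum("ij,ij->i", E, v)
# Net energy exchange accumulated over all detected particles
W_int = W.sum()
print(W_int)
```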
We use the standard deviation σW, which is computed by:
$$\sigma_{W} = \sqrt{\sum_{i = 1}^{N} \left( q\boldsymbol{E}_{w}(t_{i}) \cdot \boldsymbol{v}_{i} \right)^{2} - \frac{1}{N}\left( \sum_{i = 1}^{N} q\boldsymbol{E}_{w}(t_{i}) \cdot \boldsymbol{v}_{i} \right)^{2}},$$
where the first and second terms on the right-hand side correspond to the width and the center of the \(q\boldsymbol{E}_{w}(t_{i}) \cdot \boldsymbol{v}_{i}\) distribution, respectively, to evaluate the statistical significance of the obtained Wint relative to the fluctuation. We can identify an efficient energy exchange between waves and particles when the S-WPIA obtains a Wint that is sufficiently larger than σW. In other words, enough particles need to be collected in the computation of Wint that the obtained Wint exceeds σW at the required statistical significance; assuming a Gaussian distribution, Wint should exceed 1.64 σW for 90% significance and 1.96 σW for 95% significance. When a sufficiently large number of particles is expected in the S-WPIA, Wint can be evaluated for different kinetic energy (K) and pitch angle (α) ranges to obtain Wint(K, α). By examining the obtained Wint(K, α), we can identify the specific energy and pitch angle ranges that contribute most to the energy exchange through wave–particle interactions. In this case, σW(K, α) should also be computed to evaluate the statistical significance of the obtained Wint(K, α).
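In code, the significance test reads as follows. The numbers are again synthetic: a small negative mean is imposed on the per-particle values to mimic net wave generation, and the physical scales are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Synthetic per-particle energy exchange W(t_i) = q E_w(t_i) . v_i [J];
# the negative mean mimics electrons feeding energy into the wave.
W = rng.normal(loc=-2e-16, scale=1e-14, size=N)

W_int = W.sum()
# sigma_W = sqrt( sum W_i^2 - (1/N) (sum W_i)^2 )
sigma_W = np.sqrt((W**2).sum() - W.sum()**2 / N)

# 95% significance requires |W_int| > 1.96 sigma_W
print(W_int, sigma_W, abs(W_int) > 1.96 * sigma_W)
```

Note that this sigma_W equals sqrt(N) times the population standard deviation of the W values, so Wint grows like N for a real signal but only like sqrt(N) for noise, which is why accumulating more particles eventually separates the two.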
Schematic diagram of the S-WPIA for measuring interactions between energetic electrons and whistler-mode waves propagating purely parallel to the background magnetic field (after Katoh et al. 2013). Panels (a) and (b) represent the relation between the perpendicular component of the velocity vector of an electron (v⊥) and the wave electric (Ew) and magnetic (Bw) field vectors in the cases of (a) W < 0 and (b) W > 0. Panel (c) shows the distribution of energetic electrons as a function of W, corresponding to a negative Wint case representing wave generation. The total number of energetic electrons is N0, while N+ and N− are the numbers of energetic electrons having positive and negative W, respectively.
Specifications of instruments on board the Arase satellite for implementing the S-WPIA
For the WPIA, it is essential to ascertain that the time resolution of \(t_{i}\), the detection time for the i-th particle, is shorter than the timescale of the wave–particle interactions. For the S-WPIA on board the Arase satellite, the requirement on the relative time accuracy of each instrument used in the direct measurement of interactions between the chorus and energetic electrons in the inner magnetosphere is studied. The relative phase angle between the electromagnetic field vector of the wave (Ew and Bw) and the velocity vector v⊥ of the energetic electrons should be resolved in order to identify the sign of W correctly for each detected electron. Here, θ represents the relative phase angle between Ew and v⊥ (Fig. 1a, b), and ζ denotes the angle between Bw and v⊥. In addition, identifying the presence of an electromagnetic electron hole in the velocity phase space is one of the primary goals of the S-WPIA. Since the hole is formed in a specific range of ζ (e.g., Omura et al. 2008; Katoh et al. 2013), which rotates in time with the wave period, the wave phase variation needs to be resolved on a timescale that is sufficiently shorter than the wave period. In the inner magnetosphere, chorus emissions appear in a frequency range lower than the electron cyclotron frequency: typically, from 0.2 to 0.5 Ωe0 for the lower band chorus and from 0.5 to 0.8 Ωe0 for the upper band chorus, where Ωe0 is the electron gyrofrequency at the magnetic equator. Assuming 10 kHz as the highest electron cyclotron frequency along the Arase orbit at the equator, the wave period of the chorus is approximately 100 μs. A timing accuracy better than 10 μs therefore resolves the wave phase to within a few tens of degrees. The same accuracy should be achieved in the synchronization between wave and particle instruments in order to identify θ and ζ correctly.
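The arithmetic behind the 10 μs requirement is simple enough to check directly, using the 10 kHz upper bound quoted above:

```python
f_ce_max = 10e3               # assumed highest cyclotron frequency [Hz]
wave_period = 1.0 / f_ce_max  # -> 1e-4 s, i.e., ~100 microseconds
timing_accuracy = 10e-6       # required relative timing accuracy [s]

# Fraction of a wave cycle covered by the timing uncertainty, in degrees
phase_error_deg = 360.0 * timing_accuracy / wave_period
print(phase_error_deg)  # 36.0, i.e., a few tens of degrees
```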
The instruments on board the Arase satellite meet the requirements for direct measurements of interactions between chorus and energetic electrons by the S-WPIA. Chorus emissions are often observed on the dawn side of the inner magnetosphere and outside the plasmapause. The typical frequency range of chorus emissions is covered by the waveform capture receiver (WFC) of the plasma wave experiments (PWE) on board the Arase satellite (Kasahara et al. 2018a). Furthermore, since the ratio between the plasma frequency (fp) and the electron cyclotron frequency (fce), fp/fce, is typically less than 10, the minimum resonance energy based on the first-order cyclotron resonance condition is estimated to be in the energy range of hundreds of eV to a few keV for the upper band chorus and from a few keV to tens of keV for the lower band chorus, respectively. The resonance energy changes depending on the pitch angle of the resonant electrons and increases to over MeV for large pitch angle ranges. These estimations show that the kinetic energy range of resonant electrons, particularly for the lower band chorus, is covered by the medium-energy particle experiments (MEP-e) (Kasahara et al. 2018b), the high-energy electron instruments (HEP) (Mitani et al. submitted to Earth, Planets and Space), and the extremely high-energy electron experiment (XEP) (Higashio et al. submitted to Earth, Planets and Space) on board the Arase satellite.
Estimation of the required integration time for the S-WPIA
For the direct measurement of wave–particle interactions by the S-WPIA, a certain number of particles detected in the region of interest need to be collected in order to obtain a statistically significant Wint and/or a non-uniform distribution of particles in the wave phase space caused by the presence of an electromagnetic electron hole. Assume that the distribution of energetic electrons as a function of ζ deviates by 10% from the average due to the presence of an electromagnetic electron hole, and that the statistical fluctuation follows a Poisson distribution, for which the relative fluctuation for a particle count N is N^(1/2)/N = N^(-1/2). Then more than 100 particles need to be collected in each bin. If the distribution of particles as a function of the relative phase angle ζ is analyzed every 30°, i.e., with 12 bins for ζ from 0° to 360°, then the collection of 1200 particles is required for each kinetic energy and pitch angle range.
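The bin-count requirement above follows directly from Poisson statistics; this minimal sketch (ours, not from the paper) reproduces the numbers:

```python
import math

# Minimum counts per zeta bin so that a 10% modulation of the particle
# distribution exceeds the Poisson relative fluctuation N**0.5 / N.

def min_count_per_bin(modulation):
    # require 1/sqrt(N) < modulation  =>  N > modulation**(-2)
    return math.ceil(1.0 / modulation ** 2)

n_bins = 12                      # 30-degree zeta bins over 0..360 deg
per_bin = min_count_per_bin(0.10)
print(per_bin)                   # -> 100 particles per bin
print(n_bins * per_bin)          # -> 1200 per (energy, pitch-angle) range
```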
By referring to the specifications of the MEP-e (Kasahara et al. 2018b), we estimate the number of particles required for the S-WPIA to obtain a statistically significant Wint. The MEP-e measures electrons in the energy range of 5–80 keV using 16 sensor channels, where each sensor has an angular resolution of 5° in both elevation and azimuth. In estimating the expected particle counts for the MEP-e, the observation conditions for the Arase satellite are assumed to be as follows: (1) the background magnetic field is perpendicular to the spin axis of the Arase satellite, and (2) the MEP-e sweeps its 16 energy steps four times every second. Since the field-of-view (FOV) of each sensor channel of the MEP-e changes with time due to the satellite spin, the FOV and the corresponding pitch angle for each sensor channel, as well as the energy step of the MEP-e measurement during one spin, are computed as shown in Fig. 2. Figure 2a shows the pitch angle measured by four sensor channels illustrated by colored rectangles in the upper panel, where the same color is used for the lines in Fig. 2a and the rectangles indicating the FOV of the corresponding sensor channel. The pitch angle measured by each sensor channel changes in time due to the satellite spin, and the coverage of the pitch angle depends on the direction of the FOV with respect to the background magnetic field. The energy range measured by each sensor channel also varies in time, as shown in Fig. 2b. Since the MEP-e sweeps 16 energy steps every 0.25 s, the energy and the pitch angle measured by each sensor vary accordingly. By referring to the observation sequence indicated in Fig. 2a, b, we compute the expected count rate during one spin period as a function of both the energy and pitch angle of electrons.
The flux of incoming energetic electrons to the MEP-e is assumed to be uniform in both time and space during one spin period with a count rate of 5000 counts per second (cps) for each sensor channel. Figure 2c shows the estimated particle count as a function of the energy steps and pitch angle bins, where the width of each pitch angle bin is assumed to be 5°. The estimation shows that a particle count greater than 2000 can be expected in the wide pitch angle range from 60° to 120° in all the kinetic energy range covered by the MEP-e.
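The bookkeeping behind this estimate can be sketched with a toy model. This is not the real multi-channel FOV geometry of Fig. 2 (a single in-plane channel gives a uniform pitch-angle coverage rather than the 60°–120° concentration of Fig. 2c); it only illustrates how counts are accumulated into (energy step, pitch-angle bin) cells over one spin, using the numbers assumed in the text:

```python
# Toy bookkeeping model (not the flight FOV geometry): one sensor
# channel whose look direction rotates in the spin plane while B is
# perpendicular to the spin axis, so the measured pitch angle sweeps
# linearly between 0 and 180 deg twice per spin. Assumed numbers from
# the text: 5000 cps, 16 energy steps swept every 0.25 s, 8-s spin.

SPIN_S, CPS, NSTEP, SWEEP_S = 8.0, 5000.0, 16, 0.25
dt = SWEEP_S / NSTEP                 # dwell time per energy step (1/64 s)

counts = {}                          # (energy_step, pitch_bin) -> counts
t = 0.0
while t < SPIN_S:
    step = int(t / dt) % NSTEP       # current energy step
    phase = 360.0 * t / SPIN_S       # spin phase in degrees
    pitch = abs((phase % 360.0) - 180.0)   # folded pitch angle, 0..180
    key = (step, int(pitch // 5))    # 5-deg pitch-angle bins
    counts[key] = counts.get(key, 0.0) + CPS * dt
    t += dt

total = sum(counts.values())
print(total)                         # 5000 cps * 8 s = 40000 counts
```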
a Variation of the pitch angle at the center of the field-of-view for sensor channels of the MEP-e during one spin period of 8 s under the assumed condition. Schematics shown in the upper panel represent the FOV of each sensor channel every 2 s, where the color of rectangles corresponds to those of plotted lines. b Energy range measured by MEP-e during one spin period, where 16 energy steps are swept every 0.25 s. c Estimated number of counts measured by the MEP-e during one spin period
The required time interval for the S-WPIA based on the estimation shown in Fig. 2c is evaluated as follows. If the required number of particles is set at 2000 as estimated earlier, the required particle count can be collected by the MEP-e within one spin period in the pitch angle range from 60° to 120°. However, additional restrictions and limitations should be taken into account for the S-WPIA. If the number of particles required to increase the statistical significance of the obtained results is set at 12,000, the accumulation time should be at least six spin periods in the pitch angle range from 60° to 120°. In addition to accumulating a large particle count for statistical significance, only counts detected at the time of whistler-mode chorus enhancements should be used in order to increase the signal-to-noise ratio of the S-WPIA. We expect that both the net increase of Wint and the modulation of the particle distribution as a function of the relative phase angle ζ can only be measured in the presence of chorus emissions. Since the statistical fluctuation of the particle count is N^(1/2)/N = N^(-1/2), particles detected in the absence of chorus emissions only add to the statistical fluctuation without contributing to the modulation caused by wave–particle interactions. By restricting the accumulation to intervals of chorus emissions, every detected particle contributes to the modulation of the distribution as well as to the statistics, and therefore the signal-to-noise ratio increases. Since chorus elements appear in the spectra intermittently on a timescale of less than 1 s, it can be roughly assumed that one-third of the detected particles are accompanied by chorus elements. Taking these assumptions into account, the required accumulation time for the S-WPIA is estimated to be at least 18 spin periods, corresponding to 144 s.
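Under the stated assumptions (2000 useful counts per 8-s spin in the 60°–120° pitch-angle range, 12,000 particles wanted, one in three detected particles accompanied by chorus), the accumulation-time estimate can be reproduced as a one-liner; names are ours:

```python
import math

# Accumulation-time estimate from the text's assumptions: 2000 counts
# per spin in the 60-120 deg pitch-angle range, 12,000 particles wanted,
# and roughly one in three detected particles accompanied by chorus.

SPIN_S = 8.0   # Arase spin period [s]

def required_spins(n_required, counts_per_spin, chorus_duty_inverse):
    # spins needed if every count were useful, then scaled because only
    # ~1 of every `chorus_duty_inverse` detected particles sees chorus
    return math.ceil(n_required / counts_per_spin) * chorus_duty_inverse

spins = required_spins(12000, 2000, 3)
print(spins, spins * SPIN_S)   # -> 18 spins, 144.0 s
```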
The expected duration of the S-WPIA measurements is more than 3 min in the region of interest, and this expectation is considered in the operation planning for the Arase satellite.
In order to realize the S-WPIA in the Arase satellite, a dedicated mission network system for the synchronization of wave and particle instruments was developed. In this section, the implementation of the S-WPIA on the Arase satellite is described.
Mission network based on SpaceWire
All the scientific instruments connect to the mission network through dedicated CPU boards, which are digital processing boards designed specifically for the Arase mission. The mission network is a communication system based on the SpaceWire standard (ECSS-E-ST-50-12C 2008; ECSS-E-ST-50-51C 2010; ECSS-E-ST-50-52C 2010). The scientific instruments communicate with each other and transfer the observed data to the mission data recorder (MDR) (Takashima et al. submitted to Earth, Planets and Space). The MDR is composed of a CPU with 128 MB of SD-RAM and a 32-GB flash memory. The flash memory in the MDR is dedicated to the storage of the data related to the S-WPIA and the PWE burst mode (Kasahara et al. 2018a). The S-WPIA application software, which runs on the MDR, executes WPIA calculations and manages the data flow on the MDR (Hikishima et al. submitted to Earth, Planets and Space). Figure 3 shows the configuration of the mission network. Note that only the components related to the S-WPIA are shown in this figure. While the XEP, HEP, MEP-e, and MGF connect to the mission network through their own CPU boards, the PWE connects through two sets of CPU boards, one dedicated to the data management for the electric field channels and the other to the data management for the magnetic field channels.
Configuration of the mission network among the cooperated instruments of the S-WPIA
Each scientific instrument writes its observed data into the MDR in the S-WPIA data format through the mission network. Communication among the scientific instruments is conducted by a relay packet. The data packed in the relay packet are transferred by each instrument according to routing information, which is set in advance by commands. Through the relay packet, the S-WPIA activates the generation of the data designated for the S-WPIA in each instrument, and each instrument reports its readiness and generation status for the S-WPIA data. The total bandwidth of the mission network is 12 Mbps, and a specific bandwidth is allocated to each instrument by commands based on the bit rate of its data generation.
Accuracy of relative observation time
As stated in "Principles of the WPIA and its significance" section, the application of the S-WPIA to chorus emissions requires an accuracy of at least 10 μs for the relative observation time between the plasma wave receiver and the particle instruments. Since the observation times of the individual instruments are not synchronized with each other, a standard clock is generated and distributed by the PWE through exclusive lines between the PWE and each particle instrument, in order to maintain the accuracy of the relative observation times among the instruments. This clock, called the S-WPIA clock, is configured by dividing the source clock of the PWE. The frequency of the S-WPIA clock is 524,288 Hz, which is equivalent to a resolution of 1.907 μs (Fig. 4). Since the sampling frequency for waveforms in the PWE is generated from the same source clock as the S-WPIA clock, the waveform sampling time is completely synchronized with the time measured by the S-WPIA clock. Each instrument maintains a counter, called the S-WPIA counter, which accumulates counts of the S-WPIA clock. The time index (satellite time) is generated with a time resolution of 15.6 ms and is distributed by the satellite system. The S-WPIA counter is a 24-bit counter that resets to zero at each distribution of the time index. Thus, the relative observation time is guaranteed by the combination of the time index and the S-WPIA counter with an accuracy of 1.907 μs (Fig. 4). Each instrument writes its observation data together with the corresponding satellite time and S-WPIA counter value.
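As an illustration of how a relative timestamp can be reconstructed from the two time sources described above (the identifiers below are ours, not flight-software names):

```python
# Reconstructing a relative observation time from the 15.6-ms satellite
# time index and the 24-bit S-WPIA counter driven by the 524,288 Hz
# clock distributed by the PWE. Names are ours, not flight identifiers.

CLOCK_HZ = 524288            # S-WPIA clock frequency
TICK_S = 1.0 / CLOCK_HZ      # one counter tick, ~1.907 us
INDEX_S = 0.0156             # time-index period, 15.6 ms

def relative_time(time_index, swpia_counter):
    """Seconds since the start of the time-index stream."""
    assert 0 <= swpia_counter < 2 ** 24   # counter resets at each index
    return time_index * INDEX_S + swpia_counter * TICK_S

# two detections within the same index interval differ by whole ticks
dt = relative_time(10, 5000) - relative_time(10, 4000)
print(dt)   # 1000 ticks, ~1.907 ms
```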
Schematic diagram of the synchronization of instruments aboard the Arase satellite
Operation and output of the S-WPIA
The S-WPIA computes Wint from the observational data of the PWE, XEP, HEP, MEP-e, and MGF stored on the MDR. Because of the vast amounts of observed raw data for electromagnetic waveforms and individual particle counts, the S-WPIA measurement is intermittent and has a short duration for each orbit of the Arase satellite. First, we set the command to activate the generation of the raw data for each instrument only in the region of interest. After the observation, by referring to quick-look plots of the PWE, we determine the time interval subject to the computation of the S-WPIA. Then, we set the command for the computation of the stored data observed in the time interval to obtain Wint, σW, and N as functions of K, α, and ζ. The output of the S-WPIA, Wint(K, α, ζ), σW(K, α, ζ), and N(K, α, ζ), is transferred to the ground, and the raw data used for the S-WPIA output can also be downlinked for verification and investigation of the S-WPIA algorithm. Details of the S-WPIA calculation and the specifications of the S-WPIA software applications are described in Hikishima et al. (submitted to Earth, Planets and Space).
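The accumulation that produces Wint, σW, and N as functions of (K, α, ζ) can be sketched as follows; this is a minimal illustration of the quantity W_i = qE(t_i)·v_i described in the paper, with synthetic placeholder values rather than flight data:

```python
# Minimal sketch of the S-WPIA accumulation: W_i = q E(t_i) . v_i for
# each detected electron, binned by kinetic energy K, pitch angle alpha,
# and relative phase angle zeta. All field/particle values below are
# synthetic placeholders, not flight data.

Q = -1.602e-19   # electron charge [C]

def accumulate(events, bins):
    """events: iterable of (E_vec, v_vec, K_bin, alpha_bin, zeta_bin)."""
    for ew, v, kb, ab, zb in events:
        w = Q * sum(e * u for e, u in zip(ew, v))   # W for this particle
        s = bins.setdefault((kb, ab, zb), [0.0, 0.0, 0])
        s[0] += w          # running sum -> Wint
        s[1] += w * w      # second moment, for sigma_W
        s[2] += 1          # particle count N
    return bins

bins = accumulate([((1e-3, 0.0, 0.0), (1e7, 0.0, 0.0), 0, 0, 0),
                   ((-1e-3, 0.0, 0.0), (1e7, 0.0, 0.0), 0, 0, 1)], {})
wint, w2, n = bins[(0, 0, 0)]
print(wint, n)   # negative W: this electron transfers energy to the wave
```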
In this report, the principle of the WPIA (Fukuhara et al. 2009) and its significance for the direct measurement of wave–particle interactions in the Arase mission were described. The WPIA computes the inner product W(t_i) = qE(t_i)·v_i, where t_i is the detection time of the i-th particle, E(t_i) is the wave electric field vector at t_i, and q and v_i are the charge and velocity vector of the i-th particle, respectively. Since W(t_i) denotes the gain or loss of kinetic energy of the i-th particle, summing W over the detected particles yields the net amount of energy exchange in the region of interest. By referring to the specifications of the MEP-e and by assuming a count rate of 5000 cps for each detector of the MEP-e, we estimated that the number of particles required to obtain statistically significant results with the S-WPIA can be collected during 18 spin periods.
The implementation of the S-WPIA on the Arase satellite is next described. The S-WPIA was installed on the Arase satellite as a software function running on the mission data processor. It uses an electromagnetic field waveform measured by the WFC of the PWE and velocity vectors detected by the MEP-e, HEP, and XEP. The primary goal of the S-WPIA is measuring the energy exchange between the whistler-mode chorus emissions and energetic electrons in the inner magnetosphere. It is essential for the S-WPIA to synchronize instruments with a relative time accuracy that is better than the time period of the plasma wave oscillations. Since the typical frequency of chorus emissions is a few kHz in the inner magnetosphere, a relative time accuracy better than 10 μs should be maintained in order to measure the relative phase angle between wave electromagnetic field and velocity vectors with an accuracy sufficient to correctly detect the sign of W. In the Arase satellite, a dedicated system has been developed in order to obtain the required time resolution for inter-instrument communication. Both the time index distributed to all instruments through the satellite system with a time resolution of 15.6 ms and the S-WPIA clock signal, which is distributed from the PWE every 1.9 μs to particle instruments through a direct line, are used. The S-WPIA has been successfully implemented on the Arase satellite with instrument specifications and mission networks suitable for the direct measurement of interactions between chorus and energetic electrons in the inner magnetosphere. The S-WPIA software on board the Arase satellite is described in detail in an accompanying paper by Hikishima et al. (submitted to Earth, Planets and Space).
YK and MK contributed theoretical consideration and data analysis. HK, MH, TT, and KA contributed discussion of the implementation. YM, YK, SK, TM, NH, AM, MO, AY, SY, SM, and IS contributed discussion of specifications of instruments.
The authors express their sincere gratitude for the numerous efforts made by all members of the ERG project. This study was supported by Grants-in-Aid for Scientific Research (23224011, JP15H05747, JP15H05815, JP15H03730, JP16H06286, and 17K18798) of the Japan Society for the Promotion of Science. This work was also supported by a Toray Science and Technology Grant of the Toray Science Foundation. This work was carried out as part of the joint research program of the Institute for Space-Earth Environmental Research (ISEE), Nagoya University. The authors wish to express their sincere appreciation to Emeritus Professor Takayuki Ono for valuable discussions and continuous encouragement of this study.
The data used in this paper can be obtained upon request to the corresponding author.
Department of Geophysics, Graduate School of Science, Tohoku University, 6-3 Aramaki-aza-aoba, Aoba, Sendai Miyagi, 980-8578, Japan
Research Institute for Sustainable Humanosphere, Kyoto University, Gokasho, Uji Kyoto, 611-0011, Japan
ISAS/JAXA, Sagamihara Kanagawa, 229-8510, Japan
Institute for Space-Earth Environmental Research, Nagoya University, Nagoya Aichi, 464-8601, Japan
Graduate School of Natural Science and Technology, Kanazawa University, Kakuma Kanazawa, 920-1192, Japan
Graduate School of Science, The University of Tokyo, Bunkyo-ku Tokyo, 113-0033, Japan
RDD, JAXA, Tsukuba Ibaraki, 305-8505, Japan
Osaka University, Toyonaka 560-0043, Japan
Buckley AM, Gough MP, Alleyne H, Yearby K, Willis I (2000) Measurement of wave–particle interactions in the magnetosphere using the DWP particle correlator. In: Proceedings of cluster-II workshop, pp 303–306
Ergun RE, Carlson CW, McFadden JP, Clemmons JH, Boehm MH (1991) Langmuir wave growth and electron bunching: results from a wave–particle correlator. J Geophys Res 96:225–238
Ergun RE, McFadden JP, Carlson CW (1998) Wave–particle correlator instrument design. Meas Tech Space Plasmas Part AGU Geophys Monogr 102:325–331
European cooperation for space standardization (ECSS-E-ST-50-12C) (2008) Space engineering, SpaceWire-Links, nodes, routers and networks. European Space Agency
European cooperation for space standardization (ECSS-E-ST-50-51C) (2010) Space engineering, SpaceWire protocol integration. European Space Agency
European cooperation for space standardization (ECSS-E-ST-50-52C) (2010) Space engineering, SpaceWire-Remote memory access protocol. European Space Agency
Fukuhara H, Kojima H, Ueda Y, Omura Y, Katoh Y, Yamanaka H (2009) A new instrument for the study of wave–particle interactions in space: one-chip Wave–Particle Interaction Analyzer. Earth Planets Space 61:765–778. https://doi.org/10.1186/BF03353183
Gough MP, Hardy DA, Oberhardt MR, Burke WJ, Gentile LC, McNeil B, Bounar K, Thompson DC, Raitt WJ (1995) Correlator measurements of megahertz wave–particle interactions during electron beam operations on STS. J Geophys Res 100:21561–21575
Hikishima M, Katoh Y, Kojima H (2014) Evaluation of waveform data processing in Wave–Particle Interaction Analyzer. Earth Planets Space 66:63. https://doi.org/10.1186/1880-5981-66-63
Kasahara Y, Kasaba Y, Kojima H, Yagitani S, Ishisaka K, Kumamoto A, Tsuchiya F, Ozaki M, Matsuda S, Imachi T, Miyoshi Y, Hikishima M, Katoh Y, Ota M, Shoji M, Matsuoka A, Shinohara I (2018a) The plasma wave experiment (PWE) on board the Arase (ERG) satellite. Earth Planets Space. https://doi.org/10.1186/s40623-017-0759-3
Kasahara S, Yokota S, Mitani T, Asamura K, Hirahara M, Shibano Y, Takashima T (2018b) Medium-energy particle experiments - electron analyser (MEP-e) for the exploration of energization and radiation in geospace (ERG) mission. Earth Planets Space. https://doi.org/10.1186/s40623-017-0752-x
Katoh Y (2014) A simulation study of the propagation of whistler-mode chorus in the Earth's inner magnetosphere. Earth Planets Space 66:6. https://doi.org/10.1186/1880-5981-66-6
Katoh Y, Omura Y (2004) Acceleration of relativistic electrons due to resonant scattering by whistler mode waves generated by temperature anisotropy in the inner magnetosphere. J Geophys Res 109:A12214. https://doi.org/10.1029/2004JA010654
Katoh Y, Omura Y (2007) Computer simulation of chorus wave generation in the Earth's inner magnetosphere. Geophys Res Lett 34:L03102. https://doi.org/10.1029/2006GL028594
Katoh Y, Omura Y (2011) Amplitude dependence of frequency sweep rates of whistler mode chorus emissions. J Geophys Res 116:A07201. https://doi.org/10.1029/2011JA016496
Katoh Y, Omura Y (2013) Effect of the background magnetic field inhomogeneity on generation processes of whistler-mode chorus and broadband hiss-like emissions. J Geophys Res Space Phys 118:4189–4198. https://doi.org/10.1002/jgra.50395
Katoh Y, Omura Y (2016) Electron hybrid code simulation of whistler-mode chorus generation with real parameters in the Earth's inner magnetosphere. Earth Planets Space 68:192. https://doi.org/10.1186/s40623-016-0568-0
Katoh Y, Kitahara M, Kojima H, Omura Y, Kasahara S, Hirahara M, Miyoshi Y, Seki K, Asamura K, Takashima T, Ono T (2013) Significance of Wave–Particle Interaction Analyzer for direct measurements of nonlinear wave–particle interactions. Ann Geophys 31:503–512. https://doi.org/10.5194/angeo-31-503-2013
Kitahara M, Katoh Y (2016) Method for direct detection of pitch angle scattering of energetic electrons caused by whistler mode chorus emissions. J Geophys Res Space Phys. https://doi.org/10.1002/2015JA021902
Kletzing CA, LaBelle J, Bounds SR, Dolan J, Kaeppler SR, Dombrowski M (2017) Phase sorting wave–particle correlator. J Geophys Res Space Phys 122:2069–2078. https://doi.org/10.1002/2016JA023334
Miyoshi Y, Morioka A, Obara T, Misawa H, Nagai T, Kasahara Y (2003) Rebuilding process of the outer radiation belt during the 3 November 1993 magnetic storm: NOAA and EXOS-D observations. J Geophys Res 108(A1):1004. https://doi.org/10.1029/2001JA007542
Omura Y, Furuya N, Summers D (2007) Relativistic turning acceleration of resonant electrons by coherent whistler mode waves in a dipole magnetic field. J Geophys Res 112:A06236. https://doi.org/10.1029/2006JA012243
Omura Y, Katoh Y, Summers D (2008) Theory and simulation of the generation of whistler-mode chorus. J Geophys Res 113:A04223. https://doi.org/10.1029/2007JA012622
Omura Y, Hikishima M, Katoh Y, Summers D, Yagitani S (2009) Nonlinear mechanisms of lower-band and upper-band VLF chorus emissions in the magnetosphere. J Geophys Res. https://doi.org/10.1029/2009JA014206
Shoji M, Miyoshi Y, Katoh Y, Keika K, Angelopoulos V, Kasahara S, Asamura K, Nakamura S, Omura Y (2017) Ion hole formation and nonlinear generation of electromagnetic ion cyclotron waves: THEMIS observations. Geophys Res Lett. https://doi.org/10.1002/2017GL074254
Summers D, Omura Y (2007) Ultra-relativistic acceleration of electrons in planetary magnetospheres. Geophys Res Lett 34:L24205. https://doi.org/10.1029/2007GL032226
Summers D, Thorne RM, Xiao F (1998) Relativistic theory of wave–particle resonant diffusion with application to electron acceleration in the magnetosphere. J Geophys Res 103:20487
Geospace Exploration by the ERG mission
Existence of chaos for partial difference equations via tangent and cotangent functions
Haihong Guo & Wei Liang
Advances in Difference Equations volume 2021, Article number: 1 (2021)
This paper is concerned with the existence of chaos for a type of partial difference equation. We establish four chaotification schemes for partial difference equations with tangent and cotangent functions, in which the systems are shown to be chaotic in the sense of Li–Yorke or of both Li–Yorke and Devaney. For illustration, three examples are provided.
In this paper, we focus on the existence of chaos in the following partial difference equation:
$$ x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr), $$
where \(n\geq 0\) is the time step, m is the lattice point with \(0\leq m\leq k<+\infty \), \(f:D\subset {\mathbf{R}^{2}}\to {\mathbf{R}}\) is a map, and \(k+1\) is the system size. Eq. (1) plays an important role in many engineering applications, such as imaging, digital filtering, and spatial dynamical systems [1, 2].
In the past years, with the development of chaos theory, chaos has been applied in many fields, such as physics, chemistry, engineering, and mathematics; in mathematics, chaos has become a significant branch of dynamical systems [3]. Furthermore, anticontrol of chaos (chaotification) is an important branch of chaos theory, and many researchers have devoted much effort to chaotification. The first important result was obtained by Chen and Liu [4], who proved that Eq. (1) in \({\mathbf{R}^{3}}\) is chaotic in the Li–Yorke sense by constructing spatial periodic orbits of specified periods. Later, Eq. (1) was reformulated into a discrete system [5]. Using this method, Shi [6] established several criteria of chaos based on chaos in scalar ordinary difference equations and snap-back repeller theory. Recently, chaotification problems for Eq. (1) with general controllers, sawtooth functions, and mod operations were studied, and all the controlled systems were proved to be chaotic in the sense of both Devaney and Li–Yorke [7–9]. In [10], two chaotification schemes for Eq. (1) via sine functions,
$$ x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \sin \bigl(\mu x(n,m)\bigr), $$
were established for \(\mu >1\). Furthermore, we proved that not only the above controlled system but also Eq. (1) with cosine functions are chaotic in the sense of both Li–Yorke and Devaney for \(\mu =1\) [11].
As one of the main classes of basic elementary functions, trigonometric functions are of great importance; sine, cosine, tangent, and cotangent are the basic ones. It is known that sine and cosine are continuous and have a geometric shape similar to that of sawtooth functions and mod operations [6–8, 12, 13]. However, tangent and cotangent are only piecewise continuous, and their geometric shapes differ from those of sine, cosine, sawtooth, and mod. Can tangent and cotangent functions serve as controllers that make the controlled Eq. (1) chaotic? In this paper, we address this interesting question and try to establish chaotification schemes for the following controlled systems:
$$\begin{aligned}& x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \tan\bigl(x(n,m)\bigr), \end{aligned}$$
$$\begin{aligned}& x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \cot\bigl(x(n,m)\bigr), \end{aligned}$$
$$\begin{aligned}& x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \tan\bigl(x(n,m+1)\bigr), \end{aligned}$$
$$\begin{aligned}& x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \cot\bigl(x(n,m+1)\bigr). \end{aligned}$$
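Before turning to the main results, a purely numerical illustration (ours, not part of the paper) may help convey why a large ε makes the tangent term a chaos-inducing controller. In the simplest admissible setting \(f\equiv 0\) and \(k=0\), system (2) reduces to the scalar map \(g(x)=\varepsilon \tan x\), and for \(\varepsilon >5\pi /4\) the image of each of the blocks \([-\pi /4,\pi /4]\) and \([3\pi /4,5\pi /4]\) stretches across both blocks, which is the coupled-expansion mechanism exploited below:

```python
import math

# Endpoint check of the covering relations g(V1), g(V2) >= V1 U V2 for
# g(x) = eps * tan(x), i.e., system (2) with f = 0 and k = 0, taking
# eps = 5 > 5*pi/4. Illustration only; the general case needs the proof.

def g(x, eps):
    return eps * math.tan(x)

eps = 5.0
lo1, hi1 = -math.pi / 4, math.pi / 4          # V1
lo2, hi2 = 3 * math.pi / 4, 5 * math.pi / 4   # V2

# tan is continuous and increasing on each block, so it suffices that
# the endpoint images straddle both blocks (intermediate value theorem)
assert g(lo1, eps) <= lo1 and g(hi1, eps) >= hi2
assert g(lo2, eps) <= lo1 and g(hi2, eps) >= hi2
print("covering relations hold for eps =", eps)
```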
The rest of this paper is organized as follows. In Sect. 2, we list some basic concepts and lemmas about chaos. In Sect. 3, we consider anticontrol of chaos for Eq. (1) with tangent and cotangent functions, give four theorems, and prove that all the controlled systems are chaotic in the sense of Li–Yorke or of both Li–Yorke and Devaney by coupled-expansion theory. Finally, in Sect. 4, we provide three illustrative examples.
Now we introduce some basic concepts and lemmas.
([14])
Let \((X,d)\) be a metric space, and let \(F:X\to X\) be a map. A subset S of X is called a scrambled set of F if for any two different points \(x,y\in S\),
$$ \liminf_{n\to \infty }d\bigl(F^{n}(x),F^{n}(y) \bigr)=0, \qquad \limsup_{n\to \infty }d\bigl(F^{n}(x),F^{n}(y) \bigr)>0. $$
The map F is said to be chaotic in the Li–Yorke sense if there exists an uncountable scrambled set S of F.
A map \(F:V\subset X\to V\) is said to be chaotic on V in the sense of Devaney if
F is topologically transitive in V;
the periodic points of F in V are dense in V;
F has sensitive dependence on initial conditions in V.
By the result of [16], conditions (i) and (ii) imply (iii) when F is continuous and V contains infinitely many points. Under some conditions, chaos in the sense of Devaney is stronger than chaos in the sense of Li–Yorke [17].
A nonperiodic boundary condition is given for Eq. (1) as
$$ x(n,k+1)=\varphi \bigl(x(n,p)\bigr),\quad n\geq 0, 0\leq p\leq k, $$
where p is an integer, and \(\varphi :I\subset \mathbf{R}\to \mathbf{R}\) is a map. For any given initial condition \(x(0,m)=\phi (m)\), \(0\leq m\leq k+1\), where ϕ satisfies (6), Eq. (1) obviously has a unique solution satisfying this condition. By setting
$$ x_{n}={ \bigl(}x(n,0), x(n,1),\ldots , x(n,k){ \bigr)}^{T} \in {\mathbf{R}^{k+1}},\quad n\geq 0, $$
Equation (1) with (6) can be written as
$$ x_{n+1}=F(x_{n}), \quad n\geq 0, $$
$$ F(x_{n})={ \bigl(}f\bigl(x(n,0),x(n,1)\bigr),f\bigl(x(n,1),x(n,2) \bigr),\ldots ,f\bigl(x(n,k), \varphi \bigl(x(n,p)\bigr)\bigr){ \bigr)}^{T}. $$
System (7) is called the system induced by Eq. (1) with (6).
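The folding of Eq. (1) with the boundary condition (6) into the induced map F on \({\mathbf{R}}^{k+1}\) can be sketched in code; the particular f, φ, k, and p in the example are placeholder choices of ours, used only to show the construction:

```python
# Sketch of the induced system (7): fold the lattice equation (1) with
# the boundary condition x(n, k+1) = phi(x(n, p)) into a map F on
# R^{k+1}. The f and phi below are placeholder choices, not ones fixed
# by the paper.

def induced_map(f, phi, k, p):
    def F(x):                          # x = (x(n,0), ..., x(n,k))
        y = [f(x[m], x[m + 1]) for m in range(k)]
        y.append(f(x[k], phi(x[p])))   # last site uses the boundary map
        return y
    return F

# example: f(x, y) = (x + y)/4, phi = identity, k = 2, p = 0
F = induced_map(lambda a, b: (a + b) / 4, lambda s: s, 2, 0)
print(F([1.0, 2.0, 3.0]))   # -> [0.75, 1.25, 1.0]
```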
([8])
Equation (1) with (6) is said to be chaotic in the sense of Devaney (or Li–Yorke) on \(V\subset {\mathbf{R}}^{k+1}\) if its induced system (7) is chaotic in the sense of Devaney (or Li–Yorke) on V.
Let \((X,d)\) be a metric space, and let \(f : D \subset X \rightarrow X \) be a map. If there exist m (≥2) subsets \(V_{i}\) (\(1 \leq i \leq m\)) of D with \(V_{i} \cap V_{j} = \partial _{D}V_{i} \cap \partial _{D}V_{j}\) for each pair of \((i, j)\), \(1 \leq i \neq j \leq m\), such that
$$ f(V_{i})\supset \bigcup_{j=1} ^{m}V_{j},\quad 1 \leq i \leq m, $$
where \(\partial _{D}V_{i}\) is the relative boundary of \(V_{i}\) with respect to D, then f is said to be a coupled-expanding map in \(V_{i}\), \(1 \leq i \leq m\). Further, the map f is said to be a strictly coupled-expanding map in \(V_{i}\), \(1 \leq i \leq m \), if \(d(V_{i}, V_{j}) > 0\) for all \(1 \leq i \neq j \leq m \).
Lemma 5
Let \((X,d)\) be a metric space, and let \(V_{j}\ (1\leq j\leq m)\) be disjoint compact sets of X. If \(f: D\equiv \bigcup_{j=1}^{m}V_{j}\rightarrow X\) is a strictly coupled-expanding continuous map in \(V_{j}\), \(1\leq j\leq m\), then f is chaotic in the sense of Li–Yorke.
([20, 21])
Let \((X, d)\) be a complete metric space, and let \(f:D\subset X\rightarrow X\) be a map. Assume that there exist k disjoint bounded closed subsets \(V_{i}\) of D, \(1\leq i\leq k\), such that f is continuous in \(\bigcup_{i=1}^{k}V_{i}\) and satisfies
f is strictly coupled-expanding in \(V_{i}\), \(1\leq i\leq k\);
there exists a constant \(\lambda >1\) such that
$$ d\bigl(f(x),f(y)\bigr)\geq \lambda d(x,y), \quad \forall x,y\in V_{i}, 1 \leq i\leq k. $$
Then f has an invariant Cantor set \(V\subset \bigcup_{i=1}^{k}V_{i}\) such that \(f:V\rightarrow V\) is topologically conjugate to the subshift \(\Sigma ^{+}_{k}\to \Sigma ^{+}_{k}\). Consequently, f is chaotic on V in the Devaney and Li–Yorke senses.
Main results
In this section, we establish four chaotification schemes for Eq. (1) with tangent and cotangent functions.
Theorem 1
Consider the controlled system (2), that is,
$$ x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \tan\bigl(x(n,m)\bigr),\quad n\geq 0, 0\leq m\leq k< +\infty $$
with (6). Suppose that
there exist positive constants r and L such that
$$ \bigl\vert f(x_{1},y_{1})-f(x_{2},y_{2}) \bigr\vert \leq L \max \bigl\{ \vert x_{1}-x_{2} \vert , \vert y_{1}-y_{2} \vert \bigr\} ,\quad \forall x_{1},x_{2},y_{1},y_{2}\in [-r,r]; $$
\(\varphi :[-r, r]\to [-r, r]\) is a map with \(\varphi (0)=0\), and there exists a constant \(\lambda >0\) such that
$$ \bigl\vert \varphi (x)-\varphi (y) \bigr\vert \leq \lambda \vert x-y \vert ,\quad \forall x,y\in [-r,r]. $$
If \(r>5\pi /4\), then for each constant ε satisfying
$$ \varepsilon >\varepsilon _{0}:= \max { \biggl\{ } \frac{5\pi }{4}\bigl(1+L\max \{1, \lambda \}\bigr)-f(0,0), \frac{\pi }{4}\bigl(1+5L\max \{1,\lambda \}\bigr)+f(0,0){ \biggr\} }, $$
there exists a Cantor set \(\Lambda _{1}\subset [-\frac{\pi }{4},\frac{\pi }{4}]^{k+1}\cup [ \frac{3\pi }{4},\frac{5\pi }{4}]^{k+1}\) such that system (2) with (6) is chaotic on \(\Lambda _{1}\) in the Li–Yorke sense. Further, for each constant ε satisfying
$$ \varepsilon > \max { \bigl\{ }\varepsilon _{0},1+L\max \{1,\lambda \}{ \bigr\} }, $$
there exists a Cantor set \(\Lambda _{2}\subset [-\frac{\pi }{4},\frac{\pi }{4}]^{k+1}\cup [ \frac{3\pi }{4},\frac{5\pi }{4}]^{k+1}\) such that system (2) with (6) is chaotic on \(\Lambda _{2}\) in the Li–Yorke and Devaney senses.
We use Lemmas 5 and 6. Let
$$ x_{n+1}=F(x_{n})+\varepsilon \operatorname{Tan}(x_{n}):=G_{\varepsilon }(x_{n}),\quad n\geq 0, $$
be the induced system of the controlled system (2) with (6), where \(F(x_{n})\) is (8), and
$$ \operatorname{Tan}(x_{n})={ \bigl(}\tan\bigl(x(n,0)\bigr), \tan \bigl(x(n,1)\bigr),\ldots , \tan\bigl(x(n,k)\bigr){ \bigr)}^{T}. $$
$$ V_{1}=\biggl[-\frac{\pi }{4},\frac{\pi }{4} \biggr]^{k+1}, \qquad V_{2}=\biggl[\frac{3\pi }{4}, \frac{5\pi }{4}\biggr]^{k+1}. $$
Then \(V_{1},V_{2}\subset [-r,r]^{k+1}\) are nonempty, closed, and bounded, and
$$ d(V_{1},V_{2})=\inf { \bigl\{ } \Vert x-y \Vert : x \in V_{1},y\in V_{2}{ \bigr\} } = \frac{\pi }{2}>0. $$
The whole proof is divided into two parts.
Step 1. System (2) with (6) is chaotic in the Li–Yorke sense.
In view of Lemma 5, it suffices to show that \(G_{\varepsilon } \) is a strictly coupled-expanding map in \(V_{1}\) and \(V_{2}\).
For each \(x=(x(0), x(1),\ldots , x(k))^{T} \in V_{1}\) with \(x(j)=-\pi /4\), from (9) it follows that, for \(0\leq j\leq k-1\),
$$ \begin{aligned} G_{\varepsilon ,j}(x) &=f\bigl(x(j),x(j+1)\bigr)+ \varepsilon \tan\bigl(x(j)\bigr) \\ & =f\biggl(-\frac{\pi }{4},x(j+1)\biggr)+\varepsilon \tan\biggl(- \frac{\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \frac{\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} -\varepsilon +f(0,0) \\ & =\frac{\pi }{4}L-\varepsilon +f(0,0)\leq -\frac{\pi }{4},\end{aligned} $$
and for \(j=k\), from (6), (9), and (10) it follows that
$$ \begin{aligned} G_{\varepsilon ,k}(x) &=f\bigl(x(k),\varphi \bigl(x(p)\bigr)\bigr)+\varepsilon \tan\bigl(x(k)\bigr) \\ & =f\biggl(-\frac{\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+\varepsilon \tan\biggl(-\frac{\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \frac{\pi }{4}, \bigl\vert \varphi \bigl(x(p) \bigr) \bigr\vert \biggr\} - \varepsilon +f(0,0) \\ & \leq L \max \biggl\{ \frac{\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} - \varepsilon +f(0,0) \\ & \leq \frac{\pi }{4}L\max \{1,\lambda \}-\varepsilon +f(0,0) \leq - \frac{\pi }{4}.\end{aligned} $$
For each \(x\in V_{1}\) with \(x(j)=\pi /4\), it follows from (6), (9), and (10) that, for \(0\leq j\leq k-1\),
$$ \begin{aligned} G_{\varepsilon ,j}(x) &= f\biggl( \frac{\pi }{4},x(j+1)\biggr)+ \varepsilon \tan\biggl(\frac{\pi }{4} \biggr) \\ & \geq -L \max \biggl\{ \frac{\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} +\varepsilon +f(0,0) \\ & =-\frac{\pi }{4}L+\varepsilon +f(0,0)\geq \frac{5\pi }{4},\end{aligned} $$
and for \(j=k\),
$$ \begin{aligned} G_{\varepsilon ,k}(x) &= f\biggl( \frac{\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \tan\biggl( \frac{\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \frac{\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} + \varepsilon +f(0,0) \\ & \geq -\frac{\pi }{4}L\max \{1,\lambda \}+\varepsilon +f(0,0) \geq \frac{5\pi }{4}.\end{aligned} $$
By (9) and (10), \(G_{\varepsilon }\) is continuous on \(V_{1}\cup V_{2}\). By the intermediate value theorem and (11)–(14) we have \(G_{\varepsilon }(V_{1})\supset V_{1}\cup V_{2}\).
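As a quick sanity check of the covering relation, the corner inequalities above can be evaluated numerically for a concrete admissible pair. The sketch below (illustrative only, not part of the proof) borrows the instance from Example 1 later in the paper: \(f(x,y)=xy/32+\pi /2\) on \([-4,4]^{2}\) (so \(L=1/4\), \(f(0,0)=\pi /2\)), \(\varphi (x)=x/2\), and \(\varepsilon =9\pi /8\); all corner coordinates lie in \([-4,4]\), so the smooth branch of \(f\) suffices.

```python
import numpy as np

# Illustrative check of the corner inequalities (11)-(14), using the
# Example-1 instance (an assumption for demonstration, not the general proof).
eps = 9*np.pi/8
f = lambda x, y: x*y/32 + np.pi/2
phi = lambda x: 0.5*x
k, p, pi4 = 2, 1, np.pi/4

def G(x):
    """Induced map G_eps: entry j<k uses x(j+1); entry j=k uses phi(x(p))."""
    nxt = np.append(x[1:], phi(x[p]))      # boundary condition (6)
    return f(x, nxt) + eps*np.tan(x)

checks = []
for c, low in [(-pi4, True), (pi4, False), (3*pi4, True), (5*pi4, False)]:
    y = G(np.full(k + 1, c))               # corner with every x(j) = c
    cond = (y <= -pi4) if low else (y >= 5*pi4)
    checks.append(bool(np.all(cond)))
print(checks)  # [True, True, True, True]
```

The four booleans confirm that each boundary face of \(V_{1}\) and \(V_{2}\) is mapped beyond \(V_{1}\cup V_{2}\) on the corresponding side, as the inequalities require.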
For each \(x\in V_{2}\) with \(x(j)=3\pi /4\), we have that for \(0\leq j\leq k-1\),
$$ \begin{aligned} G_{\varepsilon ,j}(x) &= f\biggl( \frac{3\pi }{4},x(j+1)\biggr)+ \varepsilon \tan\biggl(\frac{3\pi }{4} \biggr) \\ & \leq L \max \biggl\{ \frac{3\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} -\varepsilon +f(0,0) \\ & \leq \frac{5\pi }{4}L-\varepsilon +f(0,0)\leq - \frac{\pi }{4},\end{aligned} $$
and for \(j= k\),
$$ \begin{aligned} G_{\varepsilon ,k}(x) &= f\biggl( \frac{3\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \tan\biggl( \frac{3\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \frac{3\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} - \varepsilon +f(0,0) \\ & \leq \frac{5\pi }{4}L\max \{1,\lambda \}-\varepsilon +f(0,0) \leq - \frac{\pi }{4}.\end{aligned} $$
For each \(x\in V_{2}\) with \(x(j)=5\pi /4\), for \(0\leq j\leq k-1\),
$$ \begin{aligned} G_{\varepsilon ,j}(x) &= f\biggl( \frac{5\pi }{4},x(j+1)\biggr)+ \varepsilon \tan\biggl(\frac{5\pi }{4} \biggr) \\ & \geq -L \max \biggl\{ \frac{5\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} +\varepsilon +f(0,0) \\ & =-\frac{5\pi }{4}L+\varepsilon +f(0,0)\geq \frac{5\pi }{4}\end{aligned} $$
and for \(j=k\),
$$ \begin{aligned} G_{\varepsilon ,k}(x) &= f\biggl( \frac{5\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \tan\biggl( \frac{5\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \frac{5\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} + \varepsilon +f(0,0) \\ & \geq -\frac{5\pi }{4}L\max \{1,\lambda \}+\varepsilon +f(0,0) \geq \frac{5\pi }{4}.\end{aligned} $$
By the intermediate value theorem and (15)–(18) we have \(G_{\varepsilon }(V_{2})\supset V_{1}\cup V_{2}\).
By the above discussion, \(G_{\varepsilon } \) is a strictly coupled-expanding map in \(V_{1}\) and \(V_{2}\). Therefore by Lemma 5 system (2) with (6) is chaotic in the Li–Yorke sense.
Step 2. System (2) with (6) is chaotic in both Li–Yorke and Devaney senses.
Since \(V_{1},V_{2}\subset [-r,r]^{k+1}\), from (6), (9), and (10) it follows that for all \(x,y\in V_{1}\) or \(x,y\in V_{2}\),
$$ \begin{aligned} \bigl\Vert F(x)-F(y) \bigr\Vert &=\max \bigl\{ \bigl\vert f\bigl(x(j),x(j+1)\bigr) -f\bigl(y(j),y(j+1)\bigr) \bigr\vert , 0 \leq j \leq k\bigr\} \\ &\leq L\max \bigl\{ \bigl\vert x(j)-y(j) \bigr\vert , \bigl\vert \varphi \bigl(x(p)\bigr)-\varphi \bigl(y(p)\bigr) \bigr\vert , 0\leq j, p\leq k\bigr\} \\ & \leq L\max \bigl\{ \bigl\vert x(j)-y(j) \bigr\vert , \lambda \bigl\vert x(p)-y(p) \bigr\vert , 0\leq j, p\leq k \bigr\} \\ & \leq L\max \{1, \lambda \} \Vert x-y \Vert .\end{aligned} $$
On the other hand, by Lagrange's mean value theorem, for all \(x,y\in V_{1}\) or \(x,y\in V_{2}\),
$$ \begin{aligned} \bigl\Vert \operatorname{Tan}(x)- \operatorname{Tan}(y) \bigr\Vert &=\max \bigl\{ \bigl\vert \tan\bigl(x(j) \bigr)-\tan\bigl(y(j)\bigr) \bigr\vert ,0 \leq j\leq k\bigr\} \\ & =\max \bigl\{ \bigl\vert \operatorname{sec}^{2}\xi \bigl(x(j)-y(j)\bigr) \bigr\vert ,0\leq j\leq k\bigr\} \\ & \geq \Vert x-y \Vert , \end{aligned} $$
where \(\xi \in (-\frac{\pi }{4},\frac{\pi }{4}) \cup (\frac{3\pi }{4}, \frac{5\pi }{4})\). Hence from (19) and (20) it follows that for all \(x,y\in V_{1}\) or \(x,y\in V_{2}\),
$$ \bigl\Vert G_{\varepsilon }(x)-G_{\varepsilon }(y) \bigr\Vert \geq \bigl(\varepsilon -L\max \{1, \lambda \}\bigr) \Vert x-y \Vert . $$
Since \(\varepsilon -L\max \{1, \lambda \}>1\), \(G_{\varepsilon }\) satisfies assumption (ii) of Lemma 6. Together with the result obtained in step 1, by Lemma 6 system (2) with (6) is chaotic in both Li–Yorke and Devaney senses. The proof is complete. □
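The expansion estimate used in Step 2 can likewise be probed numerically. The sketch below (illustrative only; it again reuses the Example-1 instance of \(f\), \(\varphi \), and \(\varepsilon \)) samples random pairs inside each \(V_{i}\) and checks that the observed expansion ratio never falls below \(\varepsilon -L\max \{1,\lambda \}\).

```python
import numpy as np

# Monte-Carlo probe of the expansion estimate: for x, y in the same V_i,
# ||G_eps(x) - G_eps(y)|| >= (eps - L*max{1, lam}) ||x - y|| in the sup norm.
# Example-1 instance (an assumption): L = 1/4, lam = 1/2, eps = 9*pi/8.
L, lam, eps = 0.25, 0.5, 9*np.pi/8
f = lambda x, y: x*y/32 + np.pi/2          # smooth branch; arguments stay in [-4,4]
phi = lambda x: 0.5*x
k, p, pi4 = 2, 1, np.pi/4

def G(x):
    nxt = np.append(x[1:], phi(x[p]))      # boundary condition (6)
    return f(x, nxt) + eps*np.tan(x)

c = eps - L*max(1.0, lam)                  # guaranteed expansion factor, here > 1
rng = np.random.default_rng(0)
worst = np.inf
for lo, hi in [(-pi4, pi4), (3*pi4, 5*pi4)]:
    for _ in range(2000):
        x, y = rng.uniform(lo, hi, (2, k + 1))
        worst = min(worst, np.max(np.abs(G(x) - G(y))) / np.max(np.abs(x - y)))
print(bool(worst >= c))  # True
```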
Theorem 2
Consider the controlled system
$$ x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \cot\bigl(x(n,m)\bigr),\quad n\geq 0, 0\leq m\leq k< +\infty $$
with (6). Suppose that all the conditions in Theorem 1 hold. Then for all constants ε, r satisfying
$$ \varepsilon > \max { \biggl\{ }\frac{3\pi }{4}\bigl(1+L\max \{1,\lambda \} \bigr)-f(0,0), \frac{3\pi }{4}\bigl(1+L\max \{1,\lambda \}\bigr)+f(0,0){ \biggr\} } $$
and \(r>3\pi /4\), there exists a Cantor set \(\Lambda \subset [-\frac{3\pi }{4},-\frac{\pi }{4}]^{k+1}\cup [ \frac{\pi }{4},\frac{3\pi }{4}]^{k+1}\) such that system (3) with (6) is chaotic on Λ in both Li–Yorke and Devaney senses.
We use Lemmas 5 and 6. The induced system of (3) with (6) is
$$ x_{n+1}=F(x_{n})+\varepsilon \operatorname{Cot}(x_{n}):=H_{\varepsilon }(x_{n}),\quad n\geq 0, $$
where F is defined in (8), and
$$ \operatorname{Cot}(x_{n})={ \bigl(} \cot\bigl(x(n,0)\bigr), \cot \bigl(x(n,1)\bigr), \ldots , \cot\bigl(x(n,k)\bigr){ \bigr)}^{T}. $$
Let
$$ \widetilde{V}_{1}=\biggl[-\frac{3\pi }{4},- \frac{\pi }{4}\biggr]^{k+1}, \qquad \widetilde{V}_{2}= \biggl[\frac{\pi }{4},\frac{3\pi }{4}\biggr]^{k+1}. $$
Obviously, \(\widetilde{V}_{1}, \widetilde{V}_{2}\subset [-r,r]^{k+1}\) are nonempty, closed, and bounded sets, and \(d(\widetilde{V}_{1},\widetilde{V}_{2})=\pi /2>0\).
First, we show that \(H_{\varepsilon }(\widetilde{V}_{i})\supset \widetilde{V}_{1}\cup \widetilde{V}_{2}\) for \(i=1, 2\).
For each \(x\in \widetilde{V}_{1}\) with \(x(j)=-3\pi /4\), from (6), (9), and (10) it follows that for \(0\leq j\leq k-1\),
$$ \begin{aligned} H_{\varepsilon ,j}(x) &= f\biggl(- \frac{3\pi }{4},x(j+1)\biggr)+ \varepsilon \cot\biggl(-\frac{3\pi }{4} \biggr) \\ & \geq -L \max \biggl\{ \frac{3\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} +\varepsilon +f(0,0) \\ & =-\frac{3\pi }{4}L+\varepsilon +f(0,0)\geq \frac{3\pi }{4},\end{aligned} $$
and for \(j=k\),
$$ \begin{aligned} H_{\varepsilon ,k}(x) &= f\biggl(- \frac{3\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \cot\biggl(- \frac{3\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \frac{3\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} + \varepsilon +f(0,0) \\ & \geq -\frac{3\pi }{4}L\max \{1,\lambda \}+\varepsilon +f(0,0) \geq \frac{3\pi }{4}.\end{aligned} $$
For each \(x\in \widetilde{V}_{1}\) with \(x(j)=-\pi /4\), from (6), (9), and (10) it follows that for \(0\leq j\leq k-1\),
$$ \begin{aligned} H_{\varepsilon ,j}(x) &= f\biggl(- \frac{\pi }{4},x(j+1)\biggr)+ \varepsilon \cot\biggl(-\frac{\pi }{4} \biggr) \\ & \leq L\max \biggl\{ \frac{\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} -\varepsilon +f(0,0) \\ & \leq \frac{3\pi }{4}L-\varepsilon +f(0,0)\leq - \frac{3\pi }{4},\end{aligned} $$
and for \(j=k\),
$$ \begin{aligned} H_{\varepsilon ,k}(x) &= f\biggl(- \frac{\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \cot\biggl(- \frac{\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \frac{\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} - \varepsilon +f(0,0) \\ & \leq \frac{3\pi }{4}L\max \{1,\lambda \}-\varepsilon +f(0,0) \leq - \frac{3\pi }{4}.\end{aligned} $$
For each \(x\in \widetilde{V}_{2}\) with \(x(j)=\pi /4\), for \(0\leq j\leq k-1\),
$$ \begin{aligned} H_{\varepsilon ,j}(x) &= f\biggl( \frac{\pi }{4},x(j+1)\biggr)+ \varepsilon \cot\biggl(\frac{\pi }{4} \biggr) \\ & \geq -L \max \biggl\{ \frac{\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} +\varepsilon +f(0,0) \\ & \geq -\frac{3\pi }{4}L+\varepsilon +f(0,0)\geq \frac{3\pi }{4},\end{aligned} $$
and for \(j=k\),
$$ \begin{aligned} H_{\varepsilon ,k}(x) &= f\biggl( \frac{\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \cot\biggl( \frac{\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \frac{\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} + \varepsilon +f(0,0) \\ & \geq -\frac{3\pi }{4}L\max \{1,\lambda \}+\varepsilon +f(0,0) \geq \frac{3\pi }{4}.\end{aligned} $$
For each \(x\in \widetilde{V}_{2}\) with \(x(j)=3\pi /4\), for \(0\leq j\leq k-1\),
$$ \begin{aligned} H_{\varepsilon ,j}(x) &= f\biggl( \frac{3\pi }{4},x(j+1)\biggr)+ \varepsilon \cot\biggl(\frac{3\pi }{4} \biggr) \\ & \leq L \max \biggl\{ \frac{3\pi }{4}, \bigl\vert x(j+1) \bigr\vert \biggr\} -\varepsilon +f(0,0) \\ & =\frac{3\pi }{4}L-\varepsilon +f(0,0)\leq - \frac{3\pi }{4},\end{aligned} $$
and for \(j=k\),
$$ \begin{aligned} H_{\varepsilon ,k}(x) &= f\biggl( \frac{3\pi }{4},\varphi \bigl(x(p)\bigr)\biggr)+ \varepsilon \cot\biggl( \frac{3\pi }{4}\biggr) \\ & \leq L\max \biggl\{ \frac{3\pi }{4},\lambda \bigl\vert x(p) \bigr\vert \biggr\} - \varepsilon +f(0,0) \\ & \leq \frac{3\pi }{4}L\max \{1,\lambda \}-\varepsilon +f(0,0) \leq - \frac{3\pi }{4}.\end{aligned} $$
By the intermediate value theorem and (21)–(28), we have \(H_{\varepsilon }(\widetilde{V}_{i})\supset \widetilde{V}_{1}\cup \widetilde{V}_{2}\), \(i=1, 2\). So by Lemma 5 system (3) with (6) is chaotic in the Li–Yorke sense.
Next, we show that \(H_{\varepsilon }\) satisfies assumption (ii) in Lemma 6.
By Lagrange's mean value theorem we can verify that for all \(x,y\in \widetilde{V}_{1}\) or \(x,y\in \widetilde{V}_{2}\),
$$ \begin{aligned} \bigl\Vert \operatorname{Cot}(x)- \operatorname{Cot}(y) \bigr\Vert &=\max \bigl\{ \bigl\vert \cot\bigl(x(j) \bigr)-\cot\bigl(y(j)\bigr) \bigr\vert ,0 \leq j\leq k\bigr\} \\ & =\max \bigl\{ \bigl\vert -\operatorname{csc}^{2}\theta \bigl(x(j)-y(j)\bigr) \bigr\vert ,0\leq j\leq k\bigr\} \\ & \geq \Vert x-y \Vert , \end{aligned} $$
where \(\theta \in (-\frac{3\pi }{4},-\frac{\pi }{4}) \cup (\frac{\pi }{4}, \frac{3\pi }{4})\). Hence, by (19), for all \(x,y\in \widetilde{V}_{1}\) or \(x,y\in \widetilde{V}_{2}\),
$$ \bigl\Vert H_{\varepsilon }(x)-H_{\varepsilon }(y) \bigr\Vert \geq \bigl(\varepsilon -L\max \{1, \lambda \}\bigr) \Vert x-y \Vert . $$
Since \(\varepsilon >\frac{3}{4}\pi (1+L\max \{1,\lambda \})\), we have \(\varepsilon -L\max \{1, \lambda \}>1\). Thus \(H_{\varepsilon }\) satisfies assumption (ii) in Lemma 6. By Lemma 6 system (3) with (6) is chaotic in both Li–Yorke and Devaney senses. This completes the proof. □
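The key analytic fact in this proof — \(\operatorname{csc}^{2}\theta \geq 1\) on the relevant intervals, so that cot is expanding there — can be spot-checked numerically (a sketch, not part of the proof):

```python
import numpy as np

# Check that |cot(x) - cot(y)| >= |x - y| when x, y lie in the same interval
# (-3pi/4, -pi/4) or (pi/4, 3pi/4), as the mean value theorem guarantees
# because csc(theta)^2 >= 1 there.
rng = np.random.default_rng(1)
pi4 = np.pi/4
ok = True
for lo, hi in [(-3*pi4, -pi4), (pi4, 3*pi4)]:
    x, y = rng.uniform(lo, hi, (2, 100_000))
    gap = np.abs(1/np.tan(x) - 1/np.tan(y)) - np.abs(x - y)
    ok = ok and bool(np.all(gap >= -1e-9))   # small slack for float error
print(ok)  # True
```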
Now we consider the controlled systems (4) and (5). For convenience, we give a periodic boundary condition for Eq. (1):
$$ x(n,k+1)=x(n,0),\quad n\geq 0. $$
We have the following two results.
Theorem 3
Consider the controlled system
$$ x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \tan\bigl(x(n,m+1)\bigr),\quad n \geq 0, 0\leq m\leq k< +\infty , $$
with (29). Suppose that condition (i) in Theorem 1 holds. Then all the results in Theorem 1 hold for system (4) with (29), except that \(\max \{1, \lambda \}\) in Theorem 1 is replaced by 1.
The induced system of (4) with (29) can be written as
$$ x_{n+1}=\widetilde{F}(x_{n})+\varepsilon \operatorname{Tan}(\widehat{x}_{n}):= \widetilde{G}_{\varepsilon }(x_{n}),\quad n\geq 0, $$
$$\begin{aligned}& \widetilde{F}(x_{n})={ \bigl(}f\bigl(x(n,0),x(n,1)\bigr),f \bigl(x(n,1),x(n,2)\bigr),\ldots ,f\bigl(x(n,k), x(n,0)\bigr){ \bigr)}^{T}, \\& \operatorname{Tan}(\widehat{x}_{n})={ \bigl(}\tan\bigl(x(n,1)\bigr), \tan\bigl(x(n,2)\bigr), \ldots , \tan\bigl(x(n,k)\bigr), \tan\bigl(x(n,0)\bigr){ \bigr)}^{T}. \end{aligned}$$
Let \(V_{1}\) and \(V_{2}\) be the same as in Theorem 1. We divide the proof into two parts.
Step 1. System (4) with (29) is chaotic in the Li–Yorke sense.
For each \(x\in V_{1}\) with \(x(j+1)=-\pi /4\), from (9) it follows that for \(0\leq j\leq k-1\),
$$ \begin{aligned} \widetilde{G}_{\varepsilon ,j}(x) &=f \bigl(x(j),x(j+1)\bigr)+\varepsilon \tan\bigl(x(j+1)\bigr) \\ & =f\biggl(x(j),-\frac{\pi }{4}\biggr)+\varepsilon \tan\biggl(- \frac{\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \bigl\vert x(j) \bigr\vert ,\frac{\pi }{4} \biggr\} -\varepsilon +f(0,0) \\ & =\frac{\pi }{4}L-\varepsilon +f(0,0)\leq -\frac{\pi }{4},\end{aligned} $$
and for \(j=k\), from (9) and (29) it follows that \(x(k+1)=x(0)=-\pi /4\), so that
$$ \begin{aligned} \widetilde{G}_{\varepsilon ,k}(x) &=f\bigl(x(k),x(0) \bigr)+\varepsilon \tan\bigl(x(0)\bigr) \\ & =f\biggl(x(k),-\frac{\pi }{4}\biggr)+\varepsilon \tan\biggl(- \frac{\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \bigl\vert x(k) \bigr\vert ,\frac{\pi }{4} \biggr\} -\varepsilon +f(0,0) \\ & =\frac{\pi }{4}L-\varepsilon +f(0,0)\leq -\frac{\pi }{4}.\end{aligned} $$
For each \(x\in V_{1}\) with \(x(j+1)=\pi /4\), by (9) and (29), \(x(k+1)=x(0)=\pi /4\) for \(j=k\). Therefore for \(0\leq j\leq k\),
$$ \begin{aligned} \widetilde{G}_{\varepsilon ,j}(x) &= f\biggl(x(j), \frac{\pi }{4}\biggr)+\varepsilon \tan\biggl(\frac{\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \bigl\vert x(j) \bigr\vert ,\frac{\pi }{4} \biggr\} +\varepsilon +f(0,0) \\ & =-\frac{\pi }{4}L+\varepsilon +f(0,0)\geq \frac{5\pi }{4}.\end{aligned} $$
For each \(x\in V_{2}\) with \(x(j+1)=3\pi /4\), \(0\leq j\leq k\), from (9) and (29) it follows that
$$ \begin{aligned} \widetilde{G}_{\varepsilon ,j}(x) &= f\biggl(x(j), \frac{3\pi }{4}\biggr)+\varepsilon \tan\biggl(\frac{3\pi }{4}\biggr) \\ & \leq L \max \biggl\{ \bigl\vert x(j) \bigr\vert ,\frac{3\pi }{4} \biggr\} -\varepsilon +f(0,0) \\ & \leq \frac{5\pi }{4}L-\varepsilon +f(0,0)\leq - \frac{\pi }{4},\end{aligned} $$
and for each \(x\in V_{2}\) with \(x(j+1)=5\pi /4\), \(0\leq j\leq k\), we have
$$ \begin{aligned} \widetilde{G}_{\varepsilon ,j}(x) &= f\biggl(x(j), \frac{5\pi }{4}\biggr)+\varepsilon \tan\biggl(\frac{5\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \bigl\vert x(j) \bigr\vert ,\frac{5\pi }{4} \biggr\} +\varepsilon +f(0,0) \\ & =-\frac{5\pi }{4}L+\varepsilon +f(0,0)\geq \frac{5\pi }{4}.\end{aligned} $$
By the intermediate value theorem and (31)–(35) we have \(\widetilde{G}_{\varepsilon }(V_{i})\supset V_{1}\cup V_{2}\), \(i=1, 2\). Therefore by Lemma 5 system (4) with (29) is chaotic in the Li–Yorke sense.
Step 2. System (4) with (29) is chaotic in both Li–Yorke and Devaney senses.
Since \(V_{1},V_{2}\subset [-r,r]^{k+1}\), from (9) and (29) it follows that for all \(x,y\in V_{1}\) or \(x,y\in V_{2}\),
$$ \begin{aligned} \bigl\Vert \widetilde{F}(x)-\widetilde{F}(y) \bigr\Vert &=\max \bigl\{ \bigl\vert f\bigl(x(j),x(j+1)\bigr) -f\bigl(y(j),y(j+1) \bigr) \bigr\vert , 0\leq j\leq k\bigr\} \\ &\leq L\max \bigl\{ \bigl\vert x(j)-y(j) \bigr\vert , 0\leq j\leq k\bigr\} \\ & =L \Vert x-y \Vert .\end{aligned} $$
On the other hand, by Lagrange's mean value theorem, for all \(x,y\in V_{1}\) or \(x,y\in V_{2}\),
$$ \begin{aligned} \bigl\Vert \operatorname{Tan}(\widehat{x})- \operatorname{Tan}(\widehat{y}) \bigr\Vert &=\max \bigl\{ \bigl\vert \tan \bigl(x(j)\bigr)- \tan\bigl(y(j)\bigr) \bigr\vert ,0\leq j\leq k\bigr\} \\ & =\max \bigl\{ \bigl\vert \operatorname{sec}^{2}\eta \bigl(x(j)-y(j)\bigr) \bigr\vert ,0\leq j\leq k\bigr\} \\ & \geq \Vert x-y \Vert , \end{aligned} $$
where \(\eta \in (-\frac{\pi }{4},\frac{\pi }{4}) \cup (\frac{3\pi }{4}, \frac{5\pi }{4})\). Hence from (36) and (37) it follows that
$$ \bigl\Vert \widetilde{G}_{\varepsilon }(x)-\widetilde{G}_{\varepsilon }(y) \bigr\Vert \geq (\varepsilon -L) \Vert x-y \Vert , \quad \forall x,y\in V_{1} \mbox{ or } x,y \in V_{2}. $$
Since \(\varepsilon -L>1\), \(\widetilde{G}_{\varepsilon }\) satisfies assumption (ii) of Lemma 6. Together with the result obtained in step 1, by Lemma 6 system (4) with (29) is chaotic in both Li–Yorke and Devaney senses. This completes the proof. □
Remark 1
The boundary conditions imposed on systems (2)–(3) and (4)–(5) are different. If (6) is imposed on system (4), then in (32), \(x(k+1)=\varphi (x(p))=-\pi /4\), \(0\leq p\leq k\), but we cannot ensure that \(x(p)\in [-\frac{\pi }{4}, \frac{\pi }{4}]\). Thus \(x\in V_{1}\) may not hold. Therefore (29) is imposed on systems (4) and (5).
Theorem 4
Consider the controlled system
$$ x(n+1,m)=f\bigl(x(n,m),x(n,m+1)\bigr)+\varepsilon \cot\bigl(x(n,m+1)\bigr),\quad n \geq 0, 0\leq m\leq k< +\infty , $$
with (29). Suppose that condition (i) in Theorem 1 holds. Then all the results in Theorem 2 hold for system (5) with (29), with \(\max \{1, \lambda \}\) replaced by 1.
Let
$$ x_{n+1}=\widetilde{F}(x_{n})+\varepsilon \operatorname{Cot}(\widehat{x}_{n}):= \widetilde{H}_{\varepsilon }(x_{n}),\quad n\geq 0, $$
be the induced system of system (5) with (29), where F̃ is defined in (30), and
$$ \operatorname{Cot}(\widehat{x}_{n})={ \bigl(} \cot\bigl(x(n,1)\bigr), \cot\bigl(x(n,2)\bigr), \ldots , \cot\bigl(x(n,k)\bigr), \cot\bigl(x(n,0)\bigr){ \bigr)}^{T}. $$
Let \(\widetilde{V}_{1}\) and \(\widetilde{V}_{2}\) be the same as in Theorem 2.
For each \(x\in \widetilde{V}_{1}\) with \(x(j+1)=-3\pi /4\), \(0\leq j\leq k\), from (9) and (29) it follows that
$$ \begin{aligned} \widetilde{H}_{\varepsilon ,j}(x) &= f\biggl(x(j),- \frac{3\pi }{4}\biggr)+\varepsilon \cot\biggl(-\frac{3\pi }{4}\biggr) \\ & \geq -L \max \biggl\{ \bigl\vert x(j) \bigr\vert ,\frac{3\pi }{4} \biggr\} +\varepsilon +f(0,0) \\ & =-\frac{3\pi }{4}L+\varepsilon +f(0,0)\geq \frac{3\pi }{4},\end{aligned} $$
and for each \(x\in \widetilde{V}_{1}\) with \(x(j+1)=-\pi /4\), \(0\leq j\leq k\), from (9) and (29) it follows that
$$ \begin{aligned} \widetilde{H}_{\varepsilon ,j}(x) &= f\biggl(x(j),- \frac{\pi }{4}\biggr)+\varepsilon \cot\biggl(-\frac{\pi }{4}\biggr) \\ & \leq L\max \biggl\{ \bigl\vert x(j) \bigr\vert , \frac{\pi }{4} \biggr\} -\varepsilon +f(0,0) \\ & \leq \frac{3\pi }{4}L-\varepsilon +f(0,0)\leq - \frac{3\pi }{4}.\end{aligned} $$
By the intermediate value theorem and (38)–(39) we have \(\widetilde{H}_{\varepsilon }(\widetilde{V}_{1})\supset \widetilde{V}_{1} \cup \widetilde{V}_{2}\). Similarly, we can prove that \(\widetilde{H}_{\varepsilon }(\widetilde{V}_{2})\supset \widetilde{V}_{1} \cup \widetilde{V}_{2}\).
By Lagrange's mean value theorem, for all \(x,y\in \widetilde{V}_{1}\) or \(x,y\in \widetilde{V}_{2}\),
$$ \begin{aligned} \bigl\Vert \operatorname{Cot}(\widehat{x})- \operatorname{Cot}(\widehat{y}) \bigr\Vert &=\max \bigl\{ \bigl\vert \cot \bigl(x(j)\bigr)- \cot\bigl(y(j)\bigr) \bigr\vert ,0\leq j\leq k\bigr\} \\ & =\max \bigl\{ \bigl\vert -\operatorname{csc}^{2}\theta \bigl(x(j)-y(j)\bigr) \bigr\vert ,0\leq j\leq k\bigr\} \\ & \geq \Vert x-y \Vert , \end{aligned} $$
where \(\theta \in (-\frac{3\pi }{4},-\frac{\pi }{4}) \cup (\frac{\pi }{4}, \frac{3\pi }{4})\). Together with (36), for all \(x,y\in \widetilde{V}_{1}\) and \(x,y\in \widetilde{V}_{2}\), we have
$$ \bigl\Vert \widetilde{H}_{\varepsilon }(x)-\widetilde{H}_{\varepsilon }(y) \bigr\Vert \geq (\varepsilon -L) \Vert x-y \Vert , $$
where \(\varepsilon >\frac{3\pi }{4}(1+L)>1+L\). By Lemma 6 system (5) with (29) is chaotic in both Li–Yorke and Devaney senses. This completes the proof. □
In this section, we discuss three examples with computer simulations.
Example 1
Consider the controlled system (2) with (6), where
$$ f(x,y)= \textstyle\begin{cases} \frac{1}{32}xy+\frac{1}{2}\pi , & x,y\in [-4,4], \\ \bigl(\frac{1}{2}\bigr)^{ \vert xy \vert }, & \mbox{else}, \end{cases} $$
$$ \varphi (x)=\frac{1}{2}x, \quad \forall x \in {\mathbf{R}}. $$
It is evident that \(|f_{x}(x,y)|+|f_{y}(x,y)|\leq 1/4\) for all \(x,y\in [-4,4]\), that is,
$$ \bigl\vert f(x_{1},y_{1})-f(x_{2},y_{2}) \bigr\vert \leq \frac{1}{4}\max \bigl\{ \vert x_{1}-x_{2} \vert , \vert y_{1}-y_{2} \vert \bigr\} ,\quad \forall x_{1},x_{2},y_{1},y_{2}\in [-4,4]. $$
Thus f and φ satisfy all the assumptions in Theorem 1 with \(r=4\), \(L=1/4\), \(\lambda =1/2\), and \(f(0,0)=\pi /2\). By Theorem 1, for any \(\varepsilon >17\pi /16\), there exists a Cantor set \(\Lambda \subset [-\frac{1}{4}\pi ,\frac{1}{4}\pi ]^{k+1}\cup [ \frac{3}{4}\pi ,\frac{5}{4}\pi ]^{k+1}\) such that the controlled system is chaotic on Λ in both Li–Yorke and Devaney senses. Two simulation results, in the two-dimensional plane \((x(\cdot , 0), x(\cdot , 1))\) and in the three-dimensional space \((x(\cdot , 0), x(\cdot , 1), x(\cdot , 2))\), are given in Fig. 1 for \(p=1\), \(k=1,2\), and \(\varepsilon =9\pi /8\); they exhibit the complicated dynamical behavior of the controlled system on Λ.
Simulations for system (2) with (6), where \(n=0, 1, \ldots , 10\text{,}000\) and \(p=0\). In the 2-D graph, the initial values are taken as \(x(0,0)=0.1\) and \(x(0,1)=-0.1\). The initial values are \(x(0,0)=0.1\), \(x(0,1)=-0.1\), and \(x(0,2)=0.1\) in the 3-D graph
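A minimal NumPy sketch of this simulation (not the authors' code) follows the 2-D setup with \(k=1\), \(p=1\) as in the text (note the figure caption states \(p=0\)), and the caption's initial values; it also verifies \(\varepsilon _{0}=17\pi /16\) for these parameters.

```python
import numpy as np

# Sketch of the Example-1 iteration: system (2) with boundary condition (6),
# f(x,y) = xy/32 + pi/2 on [-4,4]^2 else (1/2)^|xy|, phi(x) = x/2, eps = 9*pi/8.
eps, k, p = 9*np.pi/8, 1, 1

def f(x, y):
    inside = (np.abs(x) <= 4) & (np.abs(y) <= 4)
    return np.where(inside, x*y/32 + np.pi/2, 0.5**np.abs(x*y))

phi = lambda x: 0.5*x

# eps_0 from Theorem 1 with L = 1/4, max{1, lambda} = 1, f(0,0) = pi/2:
eps0 = max(5*np.pi/4*(1 + 0.25) - np.pi/2, np.pi/4*(1 + 5*0.25) + np.pi/2)
assert abs(eps0 - 17*np.pi/16) < 1e-12 and eps > eps0

x = np.array([0.1, -0.1])                  # x(0,0), x(0,1) from the 2-D caption
orbit = [x]
for _ in range(10_000):
    nxt = np.append(x[1:], phi(x[p]))      # x(n,k+1) = phi(x(n,p))
    x = f(x, nxt) + eps*np.tan(x)
    orbit.append(x)
orbit = np.array(orbit)
print(orbit.shape)                         # (10001, 2); scatter-plot the columns
```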
Example 2
Consider the controlled system (3) with (6), where
$$ f(x,y)= \textstyle\begin{cases} \frac{1}{9}x^{2}+\frac{1}{3}y, & x,y\in [-3,3], \\ \cos (x+y), & \mbox{else}, \end{cases} $$
$$ \varphi (x)=\frac{4}{3}x,\quad \forall x \in {\mathbf{R}}. $$
Obviously, \(f(0,0)=0\) and \(|f_{x}(x,y)|+|f_{y}(x,y)|\leq 1\) for all \(x,y\in [-3,3]\), which implies that
$$ \bigl\vert f(x_{1},y_{1})-f(x_{2},y_{2}) \bigr\vert \leq \max \bigl\{ \vert x_{1}-x_{2} \vert , \vert y_{1}-y_{2} \vert \bigr\} ,\quad \forall x_{1},x_{2},y_{1},y_{2}\in [-3,3]. $$
Hence f and φ satisfy all the assumptions in Theorem 2 with \(r=3\), \(L=1\), \(\lambda =4/3\). Thus, by Theorem 2, for any constant \(\varepsilon >7\pi /4\), there exists a Cantor set \(\Lambda \subset [-\frac{3}{4}\pi ,-\frac{1}{4}\pi ]^{k+1}\cup [ \frac{1}{4}\pi ,\frac{3}{4}\pi ]^{k+1}\) such that the controlled system (3) with (6) is chaotic on Λ in both Li–Yorke and Devaney senses. Two simulation results are shown in Fig. 2 for \(p=1\) and \(\varepsilon =2\pi \), which indicate that the controlled system has very complicated dynamical behaviors on Λ.
Simulations for system (3) with (6), where \(n=0, 1, \ldots , 10\text{,}000\) and \(p=1\). In the 2-D graph, \(k=1\), and the initial values are \(x(0,0)=1\), \(x(0,1)=-0.1\). In the 3-D graph, \(k=2\), and the initial values are \(x(0,0)=0.1\), \(x(0,1)=-0.1\), and \(x(0,2)=0.1\)
Example 3
Consider the controlled system (5) with (29), where \(f(x,y)\) is (40). By the previous discussion, f satisfies all the assumptions in Theorem 4 with \(r=3\) and \(L=1\). Thus, by Theorem 4, for any constant \(\varepsilon >3\pi /2\), there exists a Cantor set \(\Lambda \subset [-\frac{3}{4}\pi ,-\frac{1}{4}\pi ]^{k+1}\cup [ \frac{1}{4}\pi ,\frac{3}{4}\pi ]^{k+1}\) such that the controlled system is chaotic on Λ in both Li–Yorke and Devaney senses. Simulation results are shown in Fig. 3 for \(\varepsilon =2\pi \), which show that the controlled system has very complicated dynamical behaviors on Λ.
Simulations for system (5) with (29), where \(n=0, 1, \ldots , 10\text{,}000\). In the 2-D graph, \(k=1\), and the initial values are \(x(0,0)=1\), \(x(0,1)=-0.1\). In the 3-D graph, \(k=2\), and the initial values are \(x(0,0)=0.1\), \(x(0,1)=-0.1\), and \(x(0,2)=0.1\)
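A matching sketch for this example (again not the authors' code) highlights the structural difference from Example 1: the control acts on the neighboring site \(x(n,m+1)\), and the periodic boundary condition (29) wraps the lattice.

```python
import numpy as np

# Sketch of the Example-3 iteration: system (5) with periodic boundary
# condition (29), f from (40), eps = 2*pi, k = 1, 2-D caption's initial values.
eps, k = 2*np.pi, 1

def f(x, y):
    inside = (np.abs(x) <= 3) & (np.abs(y) <= 3)
    return np.where(inside, x*x/9 + y/3, np.cos(x + y))

x = np.array([1.0, -0.1])                  # x(0,0), x(0,1)
orbit = [x]
for _ in range(10_000):
    nxt = np.roll(x, -1)                   # x(n,m+1), with x(n,k+1) = x(n,0)
    x = f(x, nxt) + eps/np.tan(nxt)        # cot control acts on the neighbor
    orbit.append(x)
orbit = np.array(orbit)
print(orbit.shape)                         # (10001, 2)
```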
Gang, H., Qu, Z.: Controlling spatiotemporal chaos in coupled map lattice systems. Phys. Rev. Lett. 72(1), 68–71 (1994)
Willeboordse, F.: The spatial logistic map as a simple prototype for spatiotemporal chaos. Chaos 13(2), 533–540 (2003)
Wiggins, S., Mazel, D.: Introduction to Applied Nonlinear Dynamical Systems and Chaos. Comput. Phys. 4 (1998)
Chen, G., Liu, S.: On spatial periodic orbits and spatial chaos. Int. J. Bifurc. Chaos 13(04), 935–941 (2003)
Chen, G., Tian, C., Shi, Y.: Stability and chaos in 2-D discrete systems. Chaos Solitons Fractals 25(3), 637–647 (2005)
Shi, Y.: Chaos in first-order partial difference equations. J. Differ. Equ. Appl. 14(2), 109–126 (2008)
Liang, W., Shi, Y., Zhang, C.: Chaotification for a class of first-order partial difference equations. Int. J. Bifurc. Chaos 14(2), 717–733 (2008)
Shi, Y., Yu, P., Chen, G.: Chaotification of discrete dynamical system in Banach spaces. Int. J. Bifurc. Chaos 16(09), 2615–2636 (2006)
Liang, W., Guo, H.: Chaotification of first-order partial difference equations. Int. J. Bifurc. Chaos 30(15), 2050229 (2020)
Liang, W., Zhang, Z.: Chaotification schemes of first-order partial difference equations via sine functions. J. Differ. Equ. Appl. 25, 665–675 (2019)
Liang, W., Zhang, Z.: Anti-control of chaos for first-order partial difference equations via sine and cosine functions. Int. J. Bifurc. Chaos 29(10), 1950140 (2019)
Chen, G., Lai, D.: Feedback anticontrol of discrete chaos. Int. J. Bifurc. Chaos 8(07), 1585–1590 (1998)
Wang, X., Chen, G.: Chaotification via arbitrarily small feedback controls: theory, method, and applications. Int. J. Bifurc. Chaos 10(03), 549–570 (2000)
Li, T., Yorke, J.: Period three implies chaos. Am. Math. Mon. 82, 985–992 (1975)
Devaney, R.: An Introduction to Chaotic Dynamical Systems, 2nd edn. Addison-Wesley, Reading (1989)
Banks, J., Brooks, J., Cairns, G.: On Devaney's definition of chaos. Am. Math. Mon. 99(4), 332–334 (1992)
Huang, W., Ye, X.: Devaney's chaos or 2-scattering implies Li–Yorke's chaos. Topol. Appl. 117(3), 259–272 (2002)
Shi, Y., Yu, P.: Chaos induced by regular snap-back repellers. J. Math. Anal. Appl. 337(2), 1480–1494 (2008)
Zhang, X., Shi, Y., Chen, G.: Constructing chaotic polynomial maps. Int. J. Bifurc. Chaos 19(02), 531–543 (2009)
Shi, Y., Ju, H., Chen, G.: Coupled-expanding maps and one-sided symbolic dynamical systems. Chaos Solitons Fractals 39(5), 2138–2149 (2009)
Shi, Y., Xing, Q.: Dense distribution of chaotic maps in continuous map spaces. Dyn. Stab. Syst. 26(4), 519–535 (2011)
School of Mathematics and Information Science, Henan Polytechnic University, Jiaozuo, Henan, China
Haihong Guo & Wei Liang
WL contributed to the idea of this paper, wrote the manuscript, and revised it. HG proved the theorems and wrote this paper. Both authors read and approved the final manuscript.
Correspondence to Wei Liang.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Guo, H., Liang, W. Existence of chaos for partial difference equations via tangent and cotangent functions. Adv Differ Equ 2021, 1 (2021). https://doi.org/10.1186/s13662-020-03162-2
Keywords: Partial difference equation; Li–Yorke chaos; Devaney chaos
BMC Pregnancy and Childbirth
Prevalence, trend and determinants of adolescent childbearing in Burundi: a multilevel analysis of the 1987 to 2016–17 Burundi Demographic and Health Surveys data
Jean Claude Nibaruta1,
Bella Kamana2,
Mohamed Chahboune1,
Milouda Chebabe1,
Saad Elmadani1,
Jack E. Turman Jr.3,
Morad Guennouni1,
Hakima Amor4,
Abdellatif Baali4 &
Noureddine Elkhoudri1
BMC Pregnancy and Childbirth volume 22, Article number: 673 (2022)
Very little is known about the factors influencing adolescent childbearing in Burundi, despite an upward trend in its prevalence and its perceived implications for rapid population growth and the ill-health of young mothers and their babies. To address this gap, this study aimed to examine the prevalence, trends and determinants of adolescent childbearing in Burundi.
Secondary analyses of the 1987, 2010 and 2016–17 Burundi Demographic and Health Surveys (BDHS) data were conducted using STATA. Weighted samples of 731 (1987 BDHS), 2359 (2010 BDHS) and 3859 (2016–17 BDHS) adolescent girls aged 15–19 years were used for descriptive and trend analyses. Both bivariable and multivariable two-level logistic regression analyses were performed to identify the main factors associated with adolescent childbearing, using only the 2016–17 BDHS data.
The prevalence of adolescent childbearing increased from 5.9% in 1987 to 8.3% in 2016/17. Being aged 18–19 years (aOR = 5.85, 95% CI: 3.54–9.65, p < 0.001), illiteracy (aOR = 4.18, 95% CI: 1.88–9.30, p < 0.001), living in poor communities (aOR = 2.19, 95% CI: 1.03–4.64, p = 0.042), early marriage (aOR = 9.28, 95% CI: 3.11–27.65, p < 0.001), lack of knowledge of any contraceptive method (aOR = 5.33, 95% CI: 1.48–19.16, p = 0.010), and non-use of modern contraceptive methods (aOR = 24.48, 95% CI: 9.80–61.14, p < 0.001) were associated with higher odds of adolescent childbearing. In contrast, living in the richest category of the household wealth index (aOR = 0.52, 95% CI: 0.45–0.87, p = 0.00) and living in the West region (aOR = 0.26, 95% CI: 0.08–0.86, p = 0.027) or the South region (aOR = 0.31, 95% CI: 0.10–0.96, p = 0.041) were associated with lower odds of adolescent childbearing.
Our study found an upward trend in the prevalence of adolescent childbearing, and the odds of adolescent childbearing varied significantly by several individual- and community-level factors. School- and community-based intervention programs aimed at promoting girls' education, improving socioeconomic status, increasing knowledge and utilization of contraceptives, and preventing early marriage among adolescent girls are crucial to reduce adolescent childbearing in Burundi.
The World Health Organization (WHO) and United Nations entities define an adolescent as an individual aged 10–19 years [1, 2]. Adolescent childbearing is a major global public health issue because of its many adverse health and socio-economic consequences for both young mothers and their babies, particularly in Sub-Saharan Africa (SSA) [3, 4]. While adolescent childbearing declined significantly overall since 2004 [5], significant disparities persist between and within countries and among population groups, particularly in SSA [3, 6,7,8]. In 2015–2020, SSA had the highest levels of adolescent childbearing, followed by Asia and Latin America and the Caribbean [6]. Almost one-fifth (18.8%) of adolescent girls got pregnant in Africa, and a higher prevalence (21.5%) was observed in the East African sub-region where Burundi is located [3]. Several studies state that adolescent childbearing is associated with higher maternal mortality and morbidity and adverse child outcomes including a higher prevalence of low birth weight and higher perinatal and neonatal mortality as compared to older women [3, 4, 9]. Adolescent early initiation into childbearing lengthens the reproductive period and subsequently increases a woman's lifetime fertility rate, contributing to rapid population growth [10,11,12].
The Burundian population is characterized by its extreme youth, with 65% under the age of 25 and almost a quarter of this growing population (23%) are adolescents [13]. In Burundi, adolescent childbearing remains an important issue because of its perceived implications on the rapid population growth and ill-health of adolescent mothers and their babies [11]. According to the report of the latest Burundi Demographic and Health Survey (BDHS) [14], 8% of women aged 15–19 begun childbearing, including 6% who had at least one live birth and 2% who were pregnant with their first child. Despite a good progress in reducing maternal mortality ratio [14], a large number of adolescent girls are still dying from pregnancy and childbirth related complications. The maternal mortality rate among Burundian adolescent girls is estimated at 150 maternal deaths per 1000 women aged 15–19 years [14]. Maternal disorders are the fourth highest cause of death among teenage mothers in Burundi [13]. Early marriage and adolescent pregnancy could lead to or aggravate anemia in mothers and result in low iron stores in the offspring [15], or in prematurity or low birth weight babies [16]. Approximately 36% of Burundian adolescent girls are anemic and 0.4% have obstetric fistula [14]. On the other hand, the infant mortality rate among adolescent girls in Burundi is estimated at 59 deaths per 1000 live births, of which 30% are neonatal and 29% post-neonatal [14]. In addition, the prevalence of low birth weight is higher among adolescent mothers (7.2%) than among women aged 20–34 years (4.7%) [14].
Several studies have examined the factors influencing adolescent pregnancy and motherhood in various settings. Their results showed that early marriage or sexual intercourse [4, 7, 9], illiteracy or low level of education, poverty [3, 7, 9, 10] or living in poor neighborhoods [17, 18], age of the adolescent [4, 10, 19], marital status [3, 4, 10], rural residence and geographic region [3, 4, 10, 20] are important factors influencing adolescent childbearing. Despite an upward trend in the prevalence of adolescent childbearing and its perceived implications for rapid population growth and the poor health of young mothers and their babies, very little is known about the factors influencing adolescent childbearing in Burundi [21,22,23]. Only two BDHS reports [14, 24] containing information on these factors are available in Burundi. The results of these two surveys are limited to a few determinants of adolescent childbearing and are purely descriptive, and therefore cannot isolate the net effect of each factor in the Burundian setting. To address this gap, we aim to examine the prevalence, trend and determinants of adolescent childbearing using the 1987 to 2016–17 BDHS data.
Data sources and population
This study used data on adolescent women (aged 15–19) extracted from the three BDHS conducted in 1987 [25], 2010 [24] and 2016–2017 [14] for the descriptive statistics and the assessment of the trend in adolescent childbearing. For the second objective of identifying factors associated with adolescent childbearing, only data on adolescent women from the most recent BDHS [14] were used. The BDHS are nationally representative surveys with samples based on a two-stage stratified sampling procedure: enumeration areas (clusters) in the first stage and households in the second stage. In sampled households, all women aged between 15 and 49 years who consented to participate in the survey were interviewed. In total, 731, 2359, and 3859 adolescent women aged 15–19 years were successfully interviewed during the 1987, 2010 and 2016–17 BDHS surveys, respectively. Thus, the current study used three weighted samples of 731, 2359, and 3859 adolescent women aged 15–19 years. A detailed description of the sampling procedure for each of these three surveys is presented in the final report of each survey [14, 24, 25].
Variables of the study
Outcome variable
The outcome variable of interest in this study is adolescent childbearing, which refers to the sum of the percentage of adolescents aged 15–19 who are already mothers (have had at least one live birth) and the percentage of adolescents who are pregnant with their first child at the time of the interview [4, 26]. Thus, any adolescent who was already a mother or pregnant with her first child was coded as one (1); otherwise, she was coded as zero (0).
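The coding rule above can be sketched in a few lines of Python; the field names are illustrative placeholders, not the actual BDHS recode variable labels:

```python
# Sketch of the binary outcome "adolescent childbearing" derived from two
# hypothetical survey fields (names assumed, not the real BDHS recode labels):
#   children_ever_born -- number of live births reported by the respondent
#   currently_pregnant -- 1 if pregnant with her first child at interview, else 0
def childbearing(children_ever_born: int, currently_pregnant: int) -> int:
    """Return 1 if the adolescent has begun childbearing, else 0."""
    return 1 if (children_ever_born > 0 or currently_pregnant == 1) else 0
```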
Based on a prior literature review, our independent variables were classified into individual-level factors and community-level factors. The individual-level factors include: adolescent's age, education, household wealth index, working status, religion, access to mass media, age at first marriage, knowledge of any contraceptive methods, and modern contraceptive use. The community-level factors include: place of residence, health regions, community-level education, and community-level poverty. Of the four community-level variables, two (community-level education and community-level poverty) were created by aggregating individual-level factors (adolescent's education and household wealth index), since these two variables are not directly available in the 2016–17 BDHS dataset.
Operational definitions
Access to mass media
This variable was created by combining three variables (frequency of listening to the radio, watching television, and reading newspapers) and was coded as "yes" if the adolescent was exposed to at least one of the three media and "no" otherwise.
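As a sketch of this combination rule (assuming, as is typical of DHS frequency codes, that 0 means no exposure and higher values mean some exposure):

```python
def media_access(radio_freq: int, tv_freq: int, newspaper_freq: int) -> str:
    """Code media access as 'yes' if the adolescent is exposed to at least
    one of the three media (any frequency code greater than 0), else 'no'.
    The 0 = "not at all" coding is an assumption about the recode."""
    return "yes" if any(f > 0 for f in (radio_freq, tv_freq, newspaper_freq)) else "no"
```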
Health regions
This variable originally had eighteen categories corresponding to the eighteen current provinces of Burundi. To reduce this excessive number of categories, it was recoded into five regions: North Region, Central-East Region, West Region, South Region and Bujumbura Mairie [11].
Community-level education
Aggregate values measured as the proportion of adolescents with at least primary education, derived from data on adolescents' education. This proportion was then dichotomized at the national median into two values: low (communities in which < 50% of adolescents had at least primary education) and high (communities in which ≥ 50% of adolescents had at least primary education) community-level adolescent education.
Community-level poverty
Aggregate values measured as the proportion of adolescents living in households classified as poorest/poorer, derived from data on the household wealth index. This proportion was then dichotomized at the national median into two values: low (communities in which < 50% of adolescents lived in poorest/poorer households) and high (communities in which ≥ 50% of adolescents lived in poorest/poorer households) community-level adolescent poverty.
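The aggregation-and-median-split procedure used for both community-level variables can be sketched as follows (a simplified illustration, dichotomizing cluster proportions at the 50% cut-off the paper reports as the national median):

```python
def community_level(values_by_cluster):
    """values_by_cluster: {cluster_id: [0/1 indicator per adolescent]}.
    Aggregate the individual 0/1 indicator (e.g. 'has at least primary
    education' or 'lives in a poorest/poorer household') to a cluster
    proportion, then dichotomize at the 50% cut-off."""
    props = {c: sum(v) / len(v) for c, v in values_by_cluster.items()}
    return {c: ("high" if p >= 0.5 else "low") for c, p in props.items()}
```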
Data management and statistical analysis
After the data were extracted, recoded and reorganized, the statistical analysis was performed using STATA statistical software version 14.2. Throughout the analyses, the weighted samples were used to adjust for non-proportional sample selection and for non-response, ensuring that our results were nationally representative. Frequencies and percentages were used to describe the sociodemographic characteristics as well as the sexual and reproductive health history of the sample across the three surveys. The trend in adolescent childbearing was evaluated using the extended Mantel-Haenszel chi-square test for linear trend with the OpenEpi (version 3.01) Dose-Response program [4, 27]. A p-value ≤ 0.05 indicated a statistically significant trend.
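As a rough illustration of a chi-square test for linear trend of this kind, the sketch below implements a Cochran-Armitage-style statistic on ordered group scores; OpenEpi's extended Mantel-Haenszel version additionally rescales the variance by (N−1)/N, which this simplified version omits:

```python
def chi2_trend(cases, totals, scores):
    """Chi-square statistic (1 df) for linear trend in proportions across
    ordered groups, e.g. survey years scored 0, 1, 2.
    cases[i]/totals[i] is the outcome proportion in group i."""
    N = sum(totals)
    pbar = sum(cases) / N                                   # overall proportion
    xbar = sum(n * x for n, x in zip(totals, scores)) / N   # weighted mean score
    T = sum(r * (x - xbar) for r, x in zip(cases, scores))  # trend numerator
    var = pbar * (1 - pbar) * sum(n * (x - xbar) ** 2 for n, x in zip(totals, scores))
    return T * T / var
```

A statistic above 3.84 corresponds to p < 0.05 at one degree of freedom.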
During the BDHS data collection, two-stage stratified cluster sampling procedures were used, so the data were hierarchical. To obtain correct estimates in inferential analyses, advanced statistical models such as multilevel models, which accommodate independent variables measured at the individual and community levels, should be used to account for the clustering effect/dependency [28,29,30,31]. Thus, bivariable and multivariable multilevel logistic regression analyses were conducted to identify factors associated with adolescent childbearing, using only the most recent BDHS [14]. We first performed the bivariable multilevel logistic regression analysis to examine associations between adolescent childbearing and the selected individual- and community-level variables. Variables with a p-value ≤ 0.2 in the bivariable analysis were then included in the multivariable multilevel logistic regression analysis to assess the net effect of each independent variable on adolescent childbearing after adjusting for potential confounders. The fixed effects were reported as adjusted odds ratios (aOR) with 95% confidence intervals (CI) and p-values. Variables with a p-value < 0.05 were declared significantly associated with adolescent childbearing in the multivariable analysis.
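The two-level random-intercept model underlying these analyses can be written as follows (a standard formulation, sketched here for clarity; the paper's exact covariate set is given in Table 3):

$$\mathrm{logit}\left(p_{ij}\right)=\beta_0+\beta^{\prime}X_{ij}+\gamma^{\prime}Z_j+u_j,\kern1em u_j\sim N\left(0,\sigma^2\right)$$

where $p_{ij}$ is the probability that adolescent $i$ in community $j$ has begun childbearing, $X_{ij}$ and $Z_j$ are the individual- and community-level covariates, and $u_j$ is the community random intercept whose variance $\sigma^2$ enters the ICC and MOR.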
Before performing these multilevel logistic regression analyses, an empty model was fitted to assess the extent of variability in adolescent childbearing between clusters (between communities). This variability was quantified using the intra-class correlation coefficient (ICC) and the median odds ratio (MOR) [29,30,31,32]. The ICC represents the proportion of the between-cluster variation in the total variation (the between- plus the within-cluster variation) of the chances of adolescent childbearing [28, 29]. It can be computed with the following formula:
$$\mathrm{ICC}=\frac{\sigma^2}{\sigma^2+\pi^2/3}=\frac{\sigma^2}{\sigma^2+3.29},$$ where $\sigma^2$ represents the cluster-level variance.
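Given the cluster variance σ², the ICC is a one-line computation; as a sketch:

```python
import math

def icc(sigma2: float) -> float:
    """Latent-variable ICC for a two-level logistic model: the level-1
    residual variance is fixed at pi^2 / 3 (about 3.29)."""
    return sigma2 / (sigma2 + math.pi ** 2 / 3)
```

A cluster variance of about 0.83 would reproduce the roughly 20% ICC reported in the Results (a back-calculated illustration, not a value reported in the paper).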
The MOR is the median value of the odds ratio between the cluster at higher risk and the cluster at lower risk of adolescent childbearing, obtained when repeatedly picking two adolescent women at random from two different clusters [29, 30]. It can be computed with the following formula:
$$\mathrm{MOR}=\exp\left[\sqrt{2\times\sigma^2}\times 0.6745\right]\cong\exp\left(0.95\times\sqrt{\sigma^2}\right)$$
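The MOR formula likewise reduces to a one-line computation; as a sketch:

```python
import math

def mor(sigma2: float) -> float:
    """Median odds ratio from the cluster-level variance sigma2:
    exp(sqrt(2 * sigma2) * 0.6745), where 0.6745 is the 75th percentile
    of the standard normal distribution."""
    return math.exp(math.sqrt(2 * sigma2) * 0.6745)
```

With no between-cluster variance (σ² = 0) the MOR equals 1, indicating no community-level clustering.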
The deviance (−2 log-likelihood), Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were used to compare the fit of the null model and the full model, with smaller values indicating better fit [4, 30, 33].
Sociodemographic characteristics of samples
The sociodemographic characteristics of the adolescents included in the three surveys are summarized in Table 1. The analysis of adolescents' age showed that the majority (53.4, 61.1 and 64.5% in the 1987, 2010 and 2016–17 BDHS, respectively) were between 15 and 17 years old. Similarly, most participants resided in rural areas: 95.7% (1987 BDHS), 88.4% (2010 BDHS) and 85.8% (2016–17 BDHS). A large proportion of adolescents (75.8 and 76.5% in the 2010 and 2016–17 BDHS, respectively) lived in three health regions (North, Central-East and South). Most adolescent girls were still single: 93.2% (1987 BDHS), 90.2% (2010 BDHS) and 93.3% (2016–17 BDHS). The proportion of illiterate adolescents decreased from 73.3% (1987 BDHS) to 7.3% (2016–17 BDHS). In contrast, the percentage of adolescents who were currently working increased from 7.5% (1987 BDHS) to 57.6% (2016–17 BDHS). More than half of adolescent girls (58.5 and 53.6% in the 2010 and 2016–17 BDHS, respectively) were from very poor/poor/middle-income households. Analysis of religious affiliation showed that most adolescents were Catholic: 61.1% (2010 BDHS) and 55.7% (2016–17 BDHS).
Table 1 Sociodemographic characteristics of adolescents in Burundi using the 1987, 2010 and 2016/17 BDHS
Sexual and reproductive health characteristics of the samples
The percentage of adolescents who had their first sexual intercourse at age ≤ 14 years increased from 0.7% (1987 BDHS) to 2.6% (2016–17 BDHS). Similarly, the percentage of adolescents who had their first birth at age ≤ 17 years increased from 1.7% (1987 BDHS) to 3.3% (2016–17 BDHS). In contrast, the proportion of adolescents who had their first marriage at age ≤ 17 decreased slightly, from 4% (1987 BDHS) to 3.8% (2016–17 BDHS). Knowledge of any contraceptive method rose from 40.1% (1987 BDHS) to 89.9% (2016–17 BDHS). The percentage of adolescents who did not intend to use contraception increased from 17.8% (2010 BDHS) to 24.8% (2016–17 BDHS). On the other hand, the proportion of adolescents with an unmet need for contraception decreased from 3.2% (2010 BDHS) to 2.5% (2016–17 BDHS). Regarding fertility preference, 5.8% (2010 BDHS) of adolescents wanted to have another pregnancy, compared to 96.5% in the 2016–17 BDHS (see Table 2).
Table 2 Sexual and reproductive health characteristics of adolescents in Burundi using the 1987, 2010 and 2016/17 BDHS data
Prevalence and trends of adolescent childbearing
The prevalence and trends of adolescent childbearing were examined through its two components: the prevalence and trend of adolescents who have had at least one live birth, and the prevalence and trend of those who were pregnant with their first child at the time of the survey (see Fig. 1). The prevalence of adolescent childbearing increased from 5.9% (95% CI: 4.3–7.8) in 1987 to 9.6% (95% CI: 8.4–10.4) in 2010, and then decreased from 9.6 to 8.3% (95% CI: 7.4–9.2) in 2016/17. The trend analysis shows an increase of 2.4% from 1987 to 2016/17, although this increase was not statistically significant (p-value = 0.0503). The prevalence of adolescents who have had at least one live birth increased from 3.2% (95% CI: 2.0–4.7) in 1987 to 6.7% (95% CI: 5.7–7.7) in 2010, and then decreased from 6.7 to 6.1% (95% CI: 5.3–6.8) in 2016/17. The trend analysis shows an increase of 2.9% from 1987 to 2016/17, and this increase was statistically significant (p-value = 0.0036). On the other hand, the prevalence of adolescents who were pregnant with their first child increased from 2.7% (95% CI: 1.7–4.2) in 1987 to 2.9% (95% CI: 2.2–3.6) in 2010, and then decreased from 2.9 to 2.2% (95% CI: 1.7–2.7) in 2016/17. The trend analysis shows a decrease of 0.5% from 1987 to 2016/17, but this decrease was not statistically significant (p-value = 0.3593).
Fig. 1 Prevalence and trends of adolescent childbearing in Burundi using the 1987, 2010 and 2016–17 BDHS data
Determinants of adolescent childbearing
Bivariable and multivariable multilevel logistic regression analyses were conducted to identify individual- and community-level factors associated with adolescent childbearing, using only the most recent (2016–17) BDHS data. First, an empty model was fitted to calculate the extent of variability in adolescent childbearing between clusters using the ICC and MOR indicators. The deviance, AIC and BIC were used to select the model that best fit the data. The results of the bivariable and multivariable analyses, the random-effects model and the model fitness statistics are summarized in Table 3.
Table 3 Results of bivariable and multivariable multilevel logistic regression analyses of factors associated with adolescent childbearing in Burundi
According to the findings in Table 3, the ICC of the empty model was estimated at 20.2%, indicating that about 20.2% of the variation in adolescent childbearing was attributable to community differences. Similarly, the MOR of the empty model was estimated at 2.37, meaning that if we randomly selected two adolescent girls from two different communities, the one from the higher-risk community had 2.37 times higher odds of childbearing than the one from the lower-risk community. The model fitness findings revealed that the best-fitting model was the full model (the model with individual- and community-level factors), since it had significantly (p < 0.001) lower values of deviance (905.70), AIC (955.71) and BIC (1112.16) than the empty model. In the bivariable analysis, adolescent's age, education, working status, household wealth index, religion, access to mass media, age at first marriage, knowledge of any contraceptive methods, modern contraceptive use, health regions and community-level poverty met the minimum criterion (p ≤ 0.2) for inclusion in the multivariable analysis.
In the multivariable analysis, only adolescent's age, adolescent's education, household wealth index, age at first marriage, knowledge of any contraceptive methods, modern contraceptive use, health regions and community-level poverty remained significantly associated with adolescent childbearing. Adolescents aged 18–19 years had about 6 times higher odds (aOR = 5.85, 95% CI: 3.54–9.65, p < 0.001) of childbearing than those aged 15–17 years. The odds of childbearing among adolescents with no education were about 4 times higher (aOR = 4.18, 95% CI: 1.88–9.30, p < 0.001), and among those with only primary education about 2 times higher (aOR = 2.58, 95% CI: 1.54–4.25, p < 0.001), than among adolescents with secondary or higher education. Adolescents in the richest household quintile had 48% lower odds (aOR = 0.52, 95% CI: 0.45–0.87, p = 0.007) of childbearing than those in the poorest household quintile.
Similarly, the odds of childbearing among adolescents who married at ≤ 17 years were about 9 times higher (aOR = 9.28, 95% CI: 3.11–27.65, p < 0.001) than among those who married between 18 and 19 years. Adolescents with no knowledge of any contraceptive method had about 5 times higher odds (aOR = 5.33, 95% CI: 1.48–19.16, p = 0.010) of childbearing than those with such knowledge. Likewise, the odds of childbearing among adolescents not using modern contraceptive methods were about 24 times higher (aOR = 24.48, 95% CI: 9.80–61.14, p < 0.001) than among those using them. The odds of childbearing among adolescents living in the West and South regions were 74% (aOR = 0.26, 95% CI: 0.08–0.86, p = 0.027) and 69% (aOR = 0.31, 95% CI: 0.10–0.96, p = 0.041) lower, respectively, than among those living in Bujumbura Mairie. Finally, the odds of childbearing among adolescents living in communities with high poverty were about 2 times higher (aOR = 2.19, 95% CI: 1.03–4.64, p = 0.042) than among those living in communities with low poverty.
This study aimed to analyze the prevalence, trend and determinants of adolescent childbearing in Burundi using data from the three DHS conducted in Burundi in 1987 [25], 2010 [24] and 2016–17 [14]. Our findings showed that the prevalence of adolescent childbearing increased from 5.9% in 1987 to 8.3% in 2016/17. The trend analysis over this 30-year period (1987 to 2017) shows an increase in adolescent childbearing between 1987 and 2010, which is likely the result of the various consequences of the 1993–2005 civil war, including sexual violence [34], a rising poverty rate [13, 35, 36] and the gradual erosion of social norms that prohibited pregnancy outside of marriage, especially in urban areas [37]. A slight decrease followed between 2010 and 2017, which may be attributable to the general increase in school enrollment in Burundi since 1987, and especially since the implementation of the Free Primary School Policy (FPSP) by the Burundian government in 2005 [38]. However, the effect of this increase in school enrollment (at the individual and especially the community level) appears to have been mitigated by several factors: the rise in household poverty, especially after the 2015 post-election crisis [39], in which some girls opt for early marriage to escape poor household conditions in the parental home [35] while others move alone to the cities, especially Bujumbura Mairie, in search of work and are often vulnerable to sexual exploitation, putting them at high risk of becoming pregnant [34]; the gradual erosion of social norms that severely prohibited pregnancy outside of marriage, especially in urban areas [37]; and the difficulty of access to, and low utilization of, family planning services by adolescent girls in Burundi [23, 40, 41].
Although this upward trend in adolescent childbearing was not statistically significant, Burundi should make greater efforts to reverse it, given the negative impact of adolescent childbearing on the well-being of young mothers and their babies [21, 34, 42] and on the current demographic pressure [11, 13]. Moreover, several studies have shown that high levels of maternal and infant morbidity and mortality can be reduced by reducing adolescent childbearing rates in developing countries [3, 43, 44]. In addition, Burundi could follow the example of most of its neighboring countries, which are currently showing a downward trend in adolescent childbearing after making substantial efforts [4, 7].
Our study identified some key determinants of adolescent childbearing in the Burundian setting. Our findings indicated that adolescents aged 18–19 years were more likely to begin childbearing than those aged 15–17 years. This positive correlation between adolescent age and risk of childbearing could be explained by increased exposure to sexual intercourse and marriage as adolescents grow older [4, 10]. Our results are consistent with those of many previous studies [4, 7, 10] showing that the odds of adolescent pregnancy increase with adolescent age. However, the consequences of childbearing can be much more serious for 15–17-year-old girls than for 18–19-year-old girls: for their own health (given their physical immaturity) and that of their babies; for their acceptance in the community, given that the legal age of marriage in Burundi is 18; and for the lengthening of their reproductive period, which would contribute to a high fertility rate and further exacerbate the demographic pressure in Burundi [11]. Therefore, intervention programs to reduce or prevent adolescent childbearing in Burundi should preferably target all age groups of adolescent girls.
Similarly, our results showed that adolescents with no education were more likely to begin childbearing than those with secondary or higher education. This association could be explained by the fact that out-of-school adolescent girls do not have access to comprehensive sexuality education (CSE) [45] or to the skills necessary to negotiate sexuality and reproductive options [3]. The protective effect of education against adolescent childbearing has also been reported in several previous studies: adolescents with no education had about 2 times higher odds of childbearing than those in school [3], and teenage girls with no education had about 3 times higher odds of childbearing than those with secondary or higher education [45]. Similar results were reported in studies conducted in Malawi [10] and in five East African countries that do not include Burundi [7]. In Burundi, a significant increase in school attendance, especially at the primary level, was observed following the implementation of the FPSP initiated by the Burundian government in 2005 [38]. However, a gender gap in school attendance persists, especially at the secondary and higher levels [14, 38]. Moreover, although CSE has been integrated into the Burundian education program and even into extracurricular school clubs [22], this is not enough, as the emphasis has been placed on abstinence as the only accepted method for avoiding adolescent pregnancy [37, 38]. The information available on the benefits of contraceptive methods is also too limited to have a positive effect on girls' ability to protect themselves [22]. Furthermore, many adolescent girls are eventually forced to drop out of school because of very poor living conditions in the parental home [35, 36] and face an increased risk of pregnancy while trying to provide for their basic needs themselves [34, 35, 38].
Given the importance of education, particularly at the secondary and tertiary levels, in preventing teenage childbearing, policymakers should do everything possible to promote young girls' education at all levels of the Burundian education system, while significantly improving household socio-economic conditions and the quality of the CSE provided.
Our findings also revealed that household poverty or living in a poor community is associated with higher odds of adolescent childbearing. In the Burundian context, this association could be explained by the severe economic impact of the 1993–2005 civil war [34, 37]: 64.9% of Burundians live below the national poverty line of US$1.27 and 38.7% live in extreme poverty [35, 36]. Some rural adolescents arrive alone in the cities in search of work and are often vulnerable to sexual exploitation, exposing them to a high risk of unwanted pregnancy [34, 38]. Others, especially those from rural areas, are eventually forced to drop out of school, either because they have no money to buy sanitary pads during menstruation or because they cannot learn much without some food before school or at lunchtime [38]. Some unscrupulous men (shopkeepers, drivers, teachers, etc.) take advantage of this precariousness to offer them money in exchange for sex, which often results in unwanted pregnancies [13, 22]. Our results corroborate those of Vikat et al. [17] and of Kearney and colleagues [18]. Although the relationship between poverty and adolescent childbearing may be a vicious cycle [3], our findings and the available evidence [7, 9, 13] underscore the importance of improving the socioeconomic status of households in general, and of disadvantaged communities in particular, to reduce the prevalence of adolescent childbearing and thereby improve adolescents' sexual and reproductive health.
Unexpectedly, Bujumbura Mairie, which is generally considered less poor than other regions and where more youth have access to education [38], was found to be associated with a higher risk of adolescent pregnancy than other regions. This finding could be explained by two main factors. First, to escape poor living conditions in parental households, some rural adolescents arrive alone in Bujumbura Mairie in search of work and are often vulnerable to sexual exploitation, which puts them at increased risk of becoming pregnant [34]. Second, rural families are even more attached to social norms against out-of-wedlock pregnancy than urban families [34, 37]; to escape family stigma, some rural adolescents who experience an unwanted pregnancy prefer to move to Bujumbura Mairie as soon as possible, before their families realize they are pregnant.
This study also found that early marriage is associated with higher odds of childbearing. This link could be explained by the fact that early marriage implies early sexual debut and therefore a major risk of early pregnancy and childbearing [7, 9, 46]. Several previous studies [3, 4, 9, 46] reported similar results. In Burundi, early marriage is associated not only with poor health outcomes for young mothers and their babies [14], but also with a high fertility rate [11]. While the official age of marriage for girls in Burundi is 18, early marriage remains a common practice, especially in rural areas, as a way to escape poor living conditions in the parental home [35]. Therefore, the Burundian government should ensure strict enforcement of laws against early marriage while improving the socio-economic conditions of households. Beyond our findings, several other researchers [3, 4, 46, 47] suggest that investing in the prevention of child marriage is important not only to reduce teenage pregnancies and related complications, but also to improve a country's economic development.
Similarly, our findings showed that both lack of knowledge of any contraceptive method and non-use of modern contraceptive methods were associated with higher odds of adolescent childbearing. The positive influence of good knowledge and use of family planning services in preventing or reducing unintended pregnancies among adolescent girls has been widely reported in the scientific literature [9, 10, 42, 46]. However, most Burundian adolescent girls do not use contraception, and some do not even plan to use it in the future [14]. The prevalence of contraceptive use among adolescent girls remains very low (2.5%), and the percentage of adolescent girls who do not intend to use contraception increased from 17.8% in 2010 to 24.8% in 2016–17. Moreover, the percentage of adolescents with knowledge of any contraceptive method decreased from 91.8% in 2010 to 89.9% in 2016–17 [14, 24]. The results of this study, together with the available evidence [46, 47], highlight the importance of interventions such as CSE [42] at all levels of the Burundian education system, the provision of contraceptive services to adolescents [48], and the creation of supportive environments, including knowledge and support from parents, teachers, churches, mass media campaigns, governance, and peer education programs [42, 46], to reduce the prevalence of adolescent childbearing in Burundi. A strength of our study is that it is among the first to include trend analyses and community-level factors in the analysis of the determinants of adolescent childbearing in Burundi. In addition, this study is the first to use an advanced logistic regression model (a multilevel model) to investigate these determinants in Burundi. However, our study also has some limitations. The 1987 DHS database did not contain some of the variables of interest to our study, so we limited ourselves to the analysis of the available variables.
Moreover, the results of this study may suffer from misreporting bias regarding respondents' current ages. Respondents' ages may not always have been reported correctly, either intentionally, by reporting a higher age than the real one given the stigma surrounding adolescent pregnancy [21] and the legal consequences of early marriage, or unintentionally, given that Burundi has suffered repeated outbreaks of mass violence and political crisis [34, 37] during which registration of birth dates in government records was often impossible [49]. In addition, our study considered only current pregnancies and previous live births when assessing the prevalence of adolescent childbearing, and did not consider adolescent pregnancies that ended in miscarriage, abortion or stillbirth. Readers should bear this in mind when interpreting the results, as the prevalence may be underestimated. Given the Burundian culture, which still considers pregnancy outside of marriage a disgrace to the family [21], many cases of induced, clandestine abortion are quite possible in Burundi, as found in two recent studies conducted in two of Burundi's neighboring countries, Uganda [50] and Ethiopia [51], which showed that nearly one in six adolescent pregnancies ends in an induced, clandestine abortion. Further studies that include adolescent pregnancies ending in miscarriage, abortion or stillbirth in the prevalence estimate are needed to better understand the extent of the problem in Burundi.
The prevalence of adolescent childbearing increased from 5.9% in 1987 to 8.3% in 2016/17, although this increase was not statistically significant. The odds of adolescent childbearing varied by several individual- and community-level factors. Late adolescent age, adolescent illiteracy, household poverty or high community-level poverty, early marriage, lack of knowledge of any contraceptive method, non-use of modern contraceptive methods, and living in Bujumbura Mairie were associated with higher odds of adolescent childbearing. School- and community-based intervention programs aimed at promoting girls' education, improving socioeconomic status, increasing knowledge and utilization of contraceptives, and preventing early marriage among adolescent girls are crucial to reduce adolescent childbearing in Burundi.
The data that support the findings of this study are available for download upon a formal application from the DHS Program web site https://dhsprogram.com/data/available-datasets.cfm, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of the DHS Program.
AIC:
Akaike Information Criterion
aOR:
Adjusted Odds Ratio
BDHS:
Burundi Demographic and Health Survey
BIC:
Bayesian Information Criterion
CSE:
Comprehensive Sexuality Education
FPSP:
Free Primary School Policy
ICC:
Intra-Class Correlation Coefficient
MOR:
Median Odds Ratio
SSA:
Sub-Saharan Africa
WHO. Guidance on ethical considerations in planning and reviewing research studies on sexual and reproductive health in adolescents. Geneva: WHO; 2018.
Plummer ML, Baltag V, Strong K, Dick B, Ross DA, World Health Organization, et al. Global Accelerated Action for the Health of Adolescents (AA-HA!): guidance to support country implementation. 2017. Available from: http://apps.who.int/iris/bitstream/10665/255415/1/9789241512343-eng.pdf. Cited 2020 Feb 19
Kassa GM, Arowojolu AO, Odukogbe AA, Yalew AW. Prevalence and determinants of adolescent pregnancy in Africa: a systematic review and Meta-analysis. Reprod Health. 2018;15:195.
Kassa GM, Arowojolu AO, Odukogbe A-TA, Yalew AW. Trends and determinants of teenage childbearing in Ethiopia: evidence from the 2000 to 2016 demographic and health surveys. Ital J Pediatr. 2019;45:153.
World Bank, International Monetary Fund. Global Monitoring Report 2015/2016: Development Goals in an Era of Demographic Change. Washington, DC: World Bank; 2016.
United Nations. World Fertility 2019 : early and later childbearing among adolescent women (ST/ESA/SER.A/446). 2019. Available from: https://www.un.org/en/development/desa/population/publications/index.asp. Cited 2021 Jan 13
Wado YD, Sully EA, Mumah JN. Pregnancy and early motherhood among adolescents in five East African countries: a multi-level analysis of risk and protective factors. BMC Pregnancy Childbirth. 2019;19:59. Available on: https://bmcpregnancychildbirth.biomedcentral.com/articles/10.1186/s12884-019-2204-z.
WHO. Adolescent pregnancy. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-pregnancy. Cited 2021 Jan 12
World Health Organization. Regional Office for South-East Asia. Adolescent pregnancy situation in South-East Asia Region. Geneva: World Health Organization; 2015.
Palamuleni ME. Determinants of adolescent fertility in Malawi. Gend Behav. 2017;15:10126–41.
Nibaruta JC, Elkhoudri N, Chahboune M, Chebabe M, Elmadani S, Baali A, et al. Determinants of fertility differentials in Burundi: evidence from the 2016–17 Burundi demographic and health survey. PAMJ. 2021;38 Available from: https://www.panafrican-med-journal.com/content/article/38/316/full. Cited 2021 Apr 2.
Islam MM. Adolescent childbearing in Bangladesh. Asia Pacific Population Journal Economic and social commission for Asia and the pacific. 1999;14:73–87.
Rasmussen B, Sheehan P, Sweeny K, Symons J, Maharaj N, Kumnick M, et al. Adolescent Investment Case in Burundi: Estimating the Impacts of Social Sector Investments for adolescents. Bujumbura: Burundi: UNICEF Burundi; 2019.
Ministère à la Présidence chargé de la Bonne Gouvernance et du Plan (MPBGP), Ministère de la Santé Publique et de la Lutte Contre le Sida (MSPLS), Institut de Statistiques et d'Études Économiques du Burundi (ISTEEBU), ICF. Troisième Enquête Démographique et de Santé 2016–2017. Bujumbura, Burundi: ISTEEBU, MSPLS, and ICF.; 2017. 679. Available from: https://dhsprogram.com/publications/publication-FR335-DHS-Final-Reports.cfm
Kalaivani K. Prevalence & consequences of anaemia in pregnancy. Indian J Med Res Citeseer. 2009;130:627–33.
Ahmad MO, Kalsoom U, Sughra U, Hadi U, Imran M. Effect of maternal anaemia on birth weight. J Ayub Med Coll Abbottabad. 2011;23:77–9.
Vikat A, Rimpelä A, Kosunen E, Rimpelä M. Sociodemographic differences in the occurrence of teenage pregnancies in Finland in 1987–1998: a follow up study. J Epidemiol Community Health BMJ Publishing Group Ltd. 2002;56:659–68.
Kearney MS, Levine PB. Why is the teen birth rate in the United States so high and why does it matter? J Econ Perspect. 2012;26:141–63.
Gideon R. Factors associated with adolescent pregnancy and fertility in Uganda: analysis of the 2011 demographic and health survey data. Am J Sociol Res. 2013;3:30–5.
Neal S, Ruktanonchai C, Chandra-Mouli V, Matthews Z, Tatem AJ. Mapping adolescent first births within three East African countries using data from demographic and health surveys: exploring geospatial methods to inform policy. Reprod Health BioMed Central. 2016;13:1–29.
Ruzibiza Y. 'They are a shame to the community … ' stigma, school attendance, solitude and resilience among pregnant teenagers and teenage mothers in Mahama refugee camp, Rwanda. Glob Public Health. 2021;16:763–74.
Munezero D, Bigirimana J. Jont program "Menyumenyeshe" for improving sexual and reproductive health of adolescents and youth in Burundi. Bujumbura: Ministry of Public health and for fighting against Aids; 2017. p. 120. Available from: http://www.careevaluations.org/evaluation/improving-sexual-and-reproductive-health-of-adolescents-and-youth-in-burundi/
French H. How the "joint program" intervention should or might improve adolescent pregnancy in Burundi, how these potential effects could be encouraged, and where caution should be given; 2019.
Institut de Statistiques et d'Études Économiques du Burundi (ISTEEBU), Ministère de la Santé Publique et de la Lutte, contre le Sida (MSPLS), ICF International. Enquête Démographique et de Santé 2010. Bujumbura, Burundi: ISTEEBU, MSPLS, et ICF International.; 2012. 419. Available from: https://dhsprogram.com/publications/publication-FR253-DHS-Final-Reports.cfm
Segamba L, Ndikumasabo V, Makinson C, Ayad M. Enquête Démographique et de Santé au Burundi 1987. Columbia: Ministère de l'Intérieur Département de la Population/Burundi and Institute for Resource Development/Westinghouse; 1988. p. 385. Available from: https://dhsprogram.com/publications/publication-FR6-DHS-Final-Reports.cfm
Croft TN, Marshall AM, Allen CK, Arnold F, Assaf S, Balian S. Guide to DHS statistics; 2018. p. 645.
Dean AG, Sullivan KM, Soe MM. OpenEpi: open source epidemiologic statistics for public health, version 3.01. www.OpenEpi.com, updated 2013/04/06. 2013; Available from: http://www.openepi.com/DoseResponse/DoseResponse.htm
Sommet N, Morselli D. Keep calm and learn multilevel logistic modeling: a simplified three-step procedure using Stata, R, Mplus, and SPSS. Int Rev Soc Psychol. 2017;30:203–18.
Merlo J, Chaix B, Ohlsson H, Beckman A, Johnell K, Hjerpe P, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: using measures of clustering in multilevel logistic regression to investigate contextual phenomena. J Epidemiol Community Health. 2006;60:290–7.
Tesema GA, Worku MG. Individual-and community-level determinants of neonatal mortality in the emerging regions of Ethiopia: a multilevel mixed-effect analysis. BMC Pregnancy Childbirth. 2021;21:12.
Teshale AB, Tesema GA. Determinants of births protected against neonatal tetanus in Ethiopia: a multilevel analysis using EDHS 2016 data. Das JK, editor. Plos One. 2020;15:e0243071.
Tessema ZT, Tamirat KS. Determinants of high-risk fertility behavior among reproductive-age women in Ethiopia using the recent Ethiopian demographic health survey: a multilevel analysis. Trop Med Health BioMed Central. 2020;48:1–9.
Heck RH, Thomas S, Tabata L. Multilevel modeling of categorical outcomes using IBM SPSS: Routledge Academic; 2013. Available from: https://books.google.fr/books?id=PJsTMAuPv6kC&hl=fr&source=gbs_book_other_versions
Sommers M. Adolescents and violence: lessons from Burundi. Belgium: Belgique: Universiteit Antwerpen, Institute of Development Policy (IOB); 2013.
Berckmoes L, White B. Youth, farming and Precarity in rural Burundi. Eur J Dev Res. 2014;26:190–203.
Tokindang J, Bizabityo D, Coulibaly S, Nsabimana J-C. Profil et déterminants de la pauvreté : Rapport de l'enquête sur les Conditions de Vie et des Ménages (ECVMB-2013/2014). Bujumbura: Institut de Statistiques et d'Études Économiques du Burundi; 2015. p. 91.
Schwarz J, Merten S. 'The body is difficult': reproductive navigation through sociality and corporeality in rural Burundi. Cult Health Sex. 2022;10:1–16.
Cieslik K, Giani M, Munoz Mora JC, Ngenzebuke RL, Verwimp P. Inequality in education, school-dropout and adolescent lives in Burundi. Brussels: UNICEF-Burundi/Université Libre de Bruxelles; 2014.
Arieff A. Burundi's Electoral Crisis: In Brief. Washington, DC: Congressional Research Service; 2015.
Westeneng J, Reis R, Berckmoes LH, Berckmoes LH. The effectiveness of sexual and reproductive health education in Burundi: policy brief. Paris: UNESCO; 2020.
Nzokirishaka A, Itua I. Determinants of unmet need for family planning among married women of reproductive age in Burundi: a cross-sectional study. Contracept Reprod Med. 2018;3:11.
Hindin MJ, Kalamar AM, Thompson T, Upadhyay UD. Interventions to prevent unintended and repeat pregnancy among young people in low-and middle-income countries: a systematic review of the published and gray literature. J Adolesc Health Elsevier. 2016;59:S8–15.
Nove A, Matthews Z, Neal S, Camacho AV. Maternal mortality in adolescents compared with women of other ages: evidence from 144 countries. Lancet Global Health Elsevier. 2014;2:e155–64.
Olausson PO, Cnattingius S, Haglund B. Teenage pregnancies and risk of late fetal death and infant mortality. BJOG. Wiley Online Library. 1999;106:116–21.
Islam MM, Islam MK, Hasan MS, Hossain MB. Adolescent motherhood in Bangladesh: trends and determinants. Khan HTA, editor. Plos One. 2017;12:e0188294.
WHO. Preventing early pregnancy and poor reproductive outcomes among adolescents in developing countries: What the evidence says? Geneva: World Health Organization; 2011. https://www.who.int/publications-detail-redirect/9789241502214. Accessed 31 Aug 2022.
WHO. WHO recommendations on adolescent sexual and reproductive health and rights. Geneva: World Health Organization; 2018. https://www.who.int/publications-detail-redirect/9789241514606.
Darroch JE, Woog V, Bankole A, Ashford LS, Points K. Costs and benefits of meeting the contraceptive needs of adolescents; 2016.
Isteebu. Recensement Général de la Population et de l'Habitat au Burundi en 2008. Bujumbura, Burundi: Institut de Statistiques et d'Études Économiques du Burundi; 2008. Available from: https://www.isteebu.bi/rgph-2008/
Sully EA, Atuyambe L, Bukenya J, Whitehead HS, Blades N, Bankole A. Estimating abortion incidence among adolescents and differences in postabortion care by age: a cross-sectional study of postabortion care patients in Uganda. Contraception Elsevier. 2018;98:510–6.
Sully E, Dibaba Y, Fetters T, Blades N, Bankole A. Playing it safe: legal and clandestine abortions among adolescents in Ethiopia. J Adolesc Health Elsevier. 2018;62:729–36.
DHS Program. The DHS Program - Request Access to Datasets. The Demographic and health surveys Program. 2020. Available from: https://dhsprogram.com/data/new-user-registration.cfm. Cited 2020 Apr 21
We extend our sincere thanks to the Measure DHS program for granting permission to access and use the 1987, 2010 and 2016/17 BDHS data for this study.
This study was financed by the Burundian government through the scholarship granted to Mr. Jean Claude Nibaruta under contract No.611/BBES/0134/12/2017/2018 within the framework of his PhD studies in Morocco. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of this manuscript.
Hassan First University of Settat, Higher Institute of Health Sciences, Laboratory of Health Sciences and Technologies, Settat, Morocco
Jean Claude Nibaruta, Mohamed Chahboune, Milouda Chebabe, Saad Elmadani, Morad Guennouni & Noureddine Elkhoudri
Hassan II University, Ibn Rochd University Hospital of Casablanca, Haematology laboratory, Casablanca, Morocco
Bella Kamana
Indiana University, Richard M. Fairbanks School of Public Health, Departments of Social and Behavioral Sciences, Indianapolis, IN, USA
Jack E. Turman Jr.
Cadi Ayyad University of Marrakech, Semlalia Faculty of Science, Departments of Biology, Marrakech, Morocco
Hakima Amor & Abdellatif Baali
Jean Claude Nibaruta
Mohamed Chahboune
Milouda Chebabe
Saad Elmadani
Morad Guennouni
Hakima Amor
Abdellatif Baali
Noureddine Elkhoudri
JCN and NK conceived the idea and design and contributed to data analysis, interpretation of results, discussion and manuscript drafting. BK, MG, MC and MC contributed substantively to the discussion and manuscript drafting. SM was a major contributor to data analysis and interpretation of results. JET, HA and AB advised on data analysis and substantively revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Jean Claude Nibaruta.
The 1987, 2010, and 2016–17 survey protocols, consent forms, and data collection instruments were reviewed and approved by the National Ethics Committee for the Protection of Human Beings Participating in Biomedical and Behavioral Research in Burundi and the Institutional Review Board of ICF International. In addition, data were collected after informed consent was obtained from the participants and all information was kept confidential. For this study, permission was given by the MEASURE DHS program to access and download the three datasets after reviewing a short summary of this study submitted to the MEASURE DHS program via its website [52]. All the three datasets were treated with confidentiality and all methods were carried out in accordance with relevant guidelines and regulations.
Nibaruta, J.C., Kamana, B., Chahboune, M. et al. Prevalence, trend and determinants of adolescent childbearing in Burundi: a multilevel analysis of the 1987 to 2016–17 Burundi Demographic and Health Surveys data. BMC Pregnancy Childbirth 22, 673 (2022). https://doi.org/10.1186/s12884-022-05009-y
June 2019, 13(3): 449-460. doi: 10.3934/ipi.2019022
CT image reconstruction on a low dimensional manifold
Wenxiang Cong 1, Ge Wang 1, Qingsong Yang 1, Jia Li 2, Jiang Hsieh 3 and Rongjie Lai 4,*
Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
GE Healthcare Technologies, Waukesha, WI 53188, USA
Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
* Corresponding authors: Rongjie Lai
Received January 2018; Revised January 2019; Published March 2019
Fund Project: W. Cong, G. Wang and Q. Yang's work is partially supported by the National Institutes of Health Grant NIH/NIBIB R01 EB016977 and U01 EB017140. R. Lai's work is partially supported by the National Science Foundation NSF DMS-1522645 and an NSF CAREER Award DMS-1752934
The patch manifold of a natural image has a low-dimensional structure and accommodates rich structural information. Inspired by the recent work on the low-dimensional manifold model (LDMM), we apply the LDMM to regularize X-ray computed tomography (CT) image reconstruction. The proposed method recovers detailed structural information of images, significantly enhancing the spatial and contrast resolution of CT images. Both numerically simulated data and clinical experimental data are used to evaluate the proposed method. Comparative studies are also performed against the simultaneous algebraic reconstruction technique (SART) incorporating total variation (TV) regularization to demonstrate the merits of the proposed method. Results indicate that the LDMM-based method enables more accurate image reconstruction with high fidelity and contrast resolution.
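As context for the SART baseline mentioned in the abstract, a minimal SART iteration can be sketched as follows. This is a generic textbook sketch, not the paper's implementation: the system matrix `A`, relaxation parameter `lam`, and iteration count are placeholders, and the TV denoising step of the SART+TV baseline is omitted.

```python
import numpy as np

def sart(A, b, n_iter=50, lam=1.0):
    """One SART sweep per iteration: every pixel is updated simultaneously,
    with each ray's residual normalized by its row sum and the
    back-projection normalized by column sums."""
    m, n = A.shape
    row_sums = A.sum(axis=1).astype(float)   # total weight of each ray
    col_sums = A.sum(axis=0).astype(float)   # total weight of each pixel
    row_sums[row_sums == 0] = 1.0            # guard against empty rays
    col_sums[col_sums == 0] = 1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums    # normalized data mismatch
        x = x + lam * (A.T @ residual) / col_sums
    return x
```

For relaxation parameters in (0, 2) this iteration is known to converge on consistent systems; regularized variants interleave a denoising step (TV in the baseline above, the manifold-dimension penalty in LDMM) between such data-consistency updates.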
Keywords: CT image reconstruction, filtered backprojection (FBP), simultaneous algebraic reconstruction technique (SART), total variation (TV), low dimensional manifold model (LDMM).
Mathematics Subject Classification: Primary: 68U10; Secondary: 65K10, 65K05.
Citation: Wenxiang Cong, Ge Wang, Qingsong Yang, Jia Li, Jiang Hsieh, Rongjie Lai. CT image reconstruction on a low dimensional manifold. Inverse Problems & Imaging, 2019, 13 (3) : 449-460. doi: 10.3934/ipi.2019022
Figure 1. The patch manifold of a CT image (left) and the corresponding dimension function of the patch manifold with patch size $ 16\times 16 $ (right)
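The "patch manifold" of Figure 1 is simply the set of all overlapping s-by-s patches of the image, viewed as points in R^(s*s). A minimal construction is sketched below; the patch size and test image are illustrative, and this naive loop stands in for whatever optimized extraction the authors used.

```python
import numpy as np

def patch_set(image, s=16):
    """All overlapping s-by-s patches of a 2-D image, flattened to points in
    R^(s*s); the rows of the result sample the image's patch manifold."""
    H, W = image.shape
    patches = [
        image[i:i + s, j:j + s].ravel()
        for i in range(H - s + 1)
        for j in range(W - s + 1)
    ]
    return np.stack(patches)
```

For an H-by-W image this yields (H-s+1)(W-s+1) points in a 256-dimensional ambient space when s = 16; it is the local dimension of this point cloud that the right panel of Figure 1 estimates.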
Figure 2. Comparison of image reconstruction. (a) Ground truth CT images, (b) the reconstructed image using the LDMM-based method, and (c) the reconstructed image using SART with TV
Figure 3. Profiles of reconstructed images. (a) Profiles along the vertical midlines of the phantom and of the image reconstructed by the LDMM-based method, (b) profiles along the horizontal midlines of the phantom and of the image reconstructed by the LDMM-based method, (c) profiles along the vertical midlines of the phantom and of the image reconstructed by the SART+TV method, and (d) profiles along the horizontal midlines of the phantom and of the image reconstructed by the SART+TV method
Figure 4. The sinogram simulated from CatSim
Figure 6. The sinogram measured from a clinical x-ray CT scanner
Figure 5. Comparison of CT reconstruction. (a) Ground truth CT images, (b) the reconstructed image using the LDMM-based image reconstruction method, and (c) the reconstructed image using SART with TV
Figure 7. Comparison of CT image reconstructions from clinical CT raw data. (a) The reconstructed image using the LDMM-based method, (b) the reconstructed image using SART with TV, and (c) the reconstructed image using FPB
Elliptic curves with $2$-torsion contained in the $3$-torsion field
Authors: Julio Brau and Nathan Jones
Journal: Proc. Amer. Math. Soc. 144 (2016), 925-936
MSC (2010): Primary 11G05
DOI: https://doi.org/10.1090/proc/12786
Published electronically: July 8, 2015
Abstract: There is a modular curve $X'(6)$ of level $6$ defined over $\mathbb {Q}$ whose $\mathbb {Q}$-rational points correspond to $j$-invariants of elliptic curves $E$ over $\mathbb {Q}$ that satisfy $\mathbb {Q}(E[2]) \subseteq \mathbb {Q}(E[3])$. In this note we characterize the $j$-invariants of elliptic curves with this property by exhibiting an explicit model of $X'(6)$. Our motivation is two-fold: on the one hand, $X'(6)$ belongs to the list of modular curves which parametrize non-Serre curves (and is not well known), and on the other hand, $X'(6)(\mathbb {Q})$ gives an infinite family of examples of elliptic curves with non-abelian "entanglement fields", which is relevant to the systematic study of correction factors of various conjectural constants for elliptic curves over $\mathbb {Q}$.
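For readers outside the area, the containment in the abstract can be unpacked with two standard facts (background only, not part of the paper's results). Writing $E/\mathbb{Q}$ in short Weierstrass form $y^2 = x^3 + Ax + B$, the nontrivial $2$-torsion points are $(e_i, 0)$ with $e_i$ the roots of the cubic, so

```latex
\[
  \mathbb{Q}(E[2]) \;=\; \text{splitting field of } x^{3}+Ax+B \text{ over } \mathbb{Q},
  \qquad [\mathbb{Q}(E[2]):\mathbb{Q}] \mid 6,
\]
\[
  \mathbb{Q}(\zeta_{3}) \;\subseteq\; \mathbb{Q}(E[3]) \quad \text{(via the Weil pairing)}.
\]
```

Thus $\mathbb{Q}(E[2]) \subseteq \mathbb{Q}(E[3])$ forces the (generically $S_3$) splitting field of the cubic inside the $3$-torsion field, a non-abelian entanglement between the mod-$2$ and mod-$3$ representations whenever $\mathbb{Q}(E[2]) \neq \mathbb{Q}$.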
Retrieve articles in Proceedings of the American Mathematical Society with MSC (2010): 11G05
Retrieve articles in all journals with MSC (2010): 11G05
Julio Brau
Affiliation: Faculty of Mathematics, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, United Kingdom
Email: [email protected]
Affiliation: Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, 322 Science and Engineering Offices (M/C 249), 851 S. Morgan Street, Chicago, Illinois 60607-7045
Email: [email protected]
Received by editor(s): June 8, 2014
Received by editor(s) in revised form: February 4, 2015
Communicated by: Romyar T. Sharifi | CommonCrawl |
Characterizing dynamic behaviors of three-particle paramagnetic microswimmer near a solid surface
Qianqian Wang1,
Lidong Yang1,
Jiangfan Yu1 &
Li Zhang1,2,3
Robotics and Biomimetics volume 4, Article number: 20 (2017)
Particle-based magnetically actuated microswimmers have the potential to act as microrobotic tools for biomedical applications. In this paper, we report the dynamic behaviors of a three-particle paramagnetic microswimmer. Actuated by a rotating magnetic field at different frequencies, the microswimmer exhibits simple rotation or propulsion. When the input frequency is below 8 Hz, it exhibits simple rotation on the substrate, whereas it shows propulsion with varied poses when subjected to a frequency between 8 and 15 Hz. Furthermore, an enhanced swimming velocity was observed when the microswimmer is actuated near a solid surface. Our simulation results confirm that this surface-enhanced swimming arises from the induced pressure difference in the fluid surrounding the microagent.
Microswimmers remotely actuated by magnetic fields have been considered promising microrobotic tools because of their great potential in biomedical applications [1], such as targeted therapy [2], drug delivery [3, 4] and minimally invasive surgery [5]. Various designs of microswimmers combined with diverse magnetic actuation strategies have been proposed [6,7,8,9,10]. Among them, helical microswimmers, inspired by E. coli bacteria, have drawn the attention of many researchers. For propulsion of helical microswimmers, rotating magnetic fields are widely used to generate corkscrew motion at low Reynolds number. It was reported that, actuated by a rotating magnetic field, "artificial bacterial flagella" (ABF) perform versatile swimming behaviors and can act as effective tools for cargo transport and micromanipulation tasks [11,12,13,14,15]. These ABF have been fabricated using the self-scrolling technique [11], 3-D direct laser writing [14], the glancing angle deposition technique [16, 17], DNA-based flagellar bundles [18], and so on. The dynamics of such helical swimmers have been studied systematically below, near and above the step-out frequency. For instance, to perform corkscrew motion with continuous rotation, a magnetic helical swimmer should usually be actuated at an input frequency below its step-out frequency, whereas actuation at a frequency higher than the step-out frequency leads to a so-called "jerky motion" [19, 20], i.e., a rotation combined with stops and backward motions [21], which results in a decrease in translational velocity [12, 22,23,24]. Interestingly, Ghosh et al. [24] reported that a helical microswimmer can exhibit bistable behavior under an external field near the step-out frequency, switching randomly between two configurations, i.e., propulsion and tumbling motion.
Unlike the propulsion of tiny chiral structures in the low Reynolds number regime, it has been demonstrated recently that randomly shaped microswimmers can also be actuated effectively using a rotating magnetic field [25, 26]. These microswimmers are obtained from iron oxide nanoparticle aggregations with varied shapes based on hydrothermal carbonization. Alternatively, Cheang et al. [27] reported that achiral three-particle microswimmers exhibit controlled swimming motion under a rotating magnetic field. These microswimmers consist of three polystyrene microparticles embedded with paramagnetic or ferromagnetic nanoparticles, and varied swimming behaviors are triggered by their different magnetic properties, despite their geometrical similarity.
It is notable that recent studies of three-particle microswimmers focus on swimming behaviors in fluid with negligible boundary effects [27,28,29]; however, their swimming behaviors near a solid surface can be significantly affected by the boundary effect. Previously, boundary effects have been reported for both natural swimming organisms and artificial swimmers. The influence of solid boundaries has been observed and analyzed for E. coli bacteria [30, 31], and spermatozoa self-organize into dynamic vortices resembling quantized rotating waves on a planar surface [32]. A solid surface affects the swimming direction of ABF, resulting in drifting behaviors [14], and wobbling motion of the ABF enhances the sidewise drift due to wall effects [33]. Simulation results indicate that a microswimmer exhibits enhanced mobility when swimming between inclined rigid boundaries [34], and that a surface can deform the induced streamlines of a rotating microagent [35].
Here, we report the dynamic behaviors of a paramagnetic three-particle microswimmer actuated near a solid surface using a rotating magnetic field. With the rotation axis of the magnetic field perpendicular to the horizontal surface, the microswimmer exhibits simple rotation when the input frequency is below 8 Hz, whereas it shows propulsion when subjected to a frequency between 8 and 15 Hz (Fig. 1). Furthermore, an enhanced swimming velocity can be achieved when the microswimmer exhibits propulsion near the surface, because of the induced pressure difference in the surrounding fluid. With the rotation axis of the field parallel to the surface, the microswimmer exhibits low-frequency tumbling (1–3 Hz) and wobbling (3–15 Hz). The main contributions of this work are twofold. First, a mathematical model is proposed for the analysis of dynamic poses under different input frequencies. Second, simulation results show that the pressure induced near a surface can enhance the swimming velocity of a three-particle microswimmer, which is validated by experimental results.
The remainder of this paper is structured as follows. Mathematical modeling and simulations of the microswimmer are presented in the Methods section. Then, in the Results and discussion section, we discuss the dynamic behaviors of the microswimmer and analyze the experimental results. Finally, conclusions are given in the last section.
The three-particle microswimmer is treated as a rigid structure with two perpendicular planes of symmetry, forming an achiral structure (Fig. 2a). It is placed on a solid surface and actuated by a rotating magnetic field (Fig. 2b).
Motion at low Reynolds numbers
The hydrodynamics of the microswimmer in low Reynolds number regime can be described by the Stokes equations:
$$\begin{aligned} \eta \nabla ^2{\mathbf {u}}-\nabla p=0 \end{aligned}$$
$$\begin{aligned} \nabla \cdot {\mathbf {u}}=0 \end{aligned}$$
where p is the pressure and \({\mathbf {u}}\) is the velocity of the fluid. The relationship between the external force \({\varvec{F}}\) together with the torque \(\varvec{\tau }\), and the translational velocity \(\varvec{V}\) together with the angular velocity \(\varvec{\omega }\), is described as [36]:
$$\begin{aligned} \left[ \begin{array}{lll} \varvec{V} \\ \varvec{\omega } \end{array} \right] = \left[ \begin{array}{ccc} \varvec{K} &{} \varvec{C_o}\\ \varvec{C_o^T} &{}\varvec{\Omega _o} \end{array} \right] \left[ \begin{array}{ccc} \varvec{F} \\ \varvec{\tau } \end{array} \right] \end{aligned}$$
where \(\varvec{K}\) is the translation tensor and \(\varvec{\Omega _o}\) is the rotation tensor. \(\varvec{C_o}\) is the coupling tensor, representing the coupling of the translational and rotational motions of a microagent. For the microagent in Fig. 2a, the matrices \(\varvec{K}\), \(\varvec{\Omega _o}\) and \(\varvec{C_o}\) are given by
$$\begin{aligned} {\mathbf {K}} = \left[ \begin{array}{lll} K_1 &{} 0 &{} 0 \\ 0 &{} K_2 &{} 0 \\ 0 &{} 0 &{} K_3 \\ \end{array} \right] ,\quad \mathbf {\Omega _o} = \left[ \begin{array}{ccc} \Omega _1 &{} 0 &{} 0 \\ 0 &{} \Omega _2 &{} 0 \\ 0 &{} 0 &{} \Omega _3 \\ \end{array} \right] ,\quad \mathbf {C_o} = \left[ \begin{array}{ccc} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} C_{23} \\ 0 &{} C_{32} &{} 0 \\ \end{array} \right] \end{aligned}$$
Magnetic actuation
The magnetic force and torque exerted on the microswimmer are given by:
$$\begin{aligned} \varvec{F}=(\varvec{m}\cdot \nabla )\varvec{B} \end{aligned}$$
$$\begin{aligned} \varvec{\tau }=\varvec{m} \times \varvec{B} \end{aligned}$$
where \(\varvec{m}\) is the induced magnetic dipole moment and \(\varvec{B}\) is the flux density of the magnetic field. Here \(\varvec{F}=0\) because the applied magnetic field has uniform flux density. The angular and translational velocities of the microswimmer due to the induced magnetic torque are given by
$$\begin{aligned} \varvec{\omega }=\varvec{\Omega _o}(\varvec{m}\times \varvec{B}) \end{aligned}$$
$$\begin{aligned} \varvec{V}=\varvec{C_o}(\varvec{m}\times \varvec{B}) \end{aligned}$$
The two equations above indicate that if the coupling tensor \(\varvec{C_o}\) is nonzero, a rotating microswimmer can exhibit a translational velocity.
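As a quick numerical illustration of Eqs. 6–8, a nonzero coupling tensor turns a pure magnetic torque into a translational velocity. The tensor entries, dipole moment and field values below are hypothetical placeholders, not parameters fitted to the actual swimmer:

```python
import numpy as np

# Placeholder mobility tensors of the microswimmer in body coordinates.
Omega_o = np.diag([2.0e5, 1.5e5, 1.0e5])   # rotation tensor
C_o = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.0, 0.3],
                [0.0, 0.3, 0.0]])           # coupling tensor with nonzero C23, C32

m = np.array([1.0e-13, 0.0, 0.0])           # induced dipole moment (A m^2)
B = np.array([0.0, 9.0e-3, 0.0])            # 9 mT field sampled at one instant (T)

tau = np.cross(m, B)                        # magnetic torque, Eq. 6 (F = 0 in a uniform field)
omega = Omega_o @ tau                       # angular velocity, Eq. 7
V = C_o @ tau                               # translational velocity, Eq. 8

# A pure torque about Z yields translation along Y because C23, C32 != 0.
print(omega, V)
```

With a diagonal coupling tensor of zeros, the same torque would produce rotation only; the off-diagonal entries are what convert rotation into propulsion.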
Next, we show that the two torques (i.e., the drag torque and the magnetic torque) counterbalance each other. When the pitch angle \(\alpha\) is 0, \(\varvec{m}\) and \(\varvec{B}\) both lie in a plane perpendicular to the Z-axis (Fig. 3), so the angular velocity is \(\varvec{\omega } = [0\quad 0 \quad \omega _z]^T\) and the magnetic torque is \(\varvec{\tau _m} = [0\quad 0 \quad \tau _{mz}]^T\). The induced magnetic torque can be treated as the torque exerted on a chain consisting of three spherical particles [37], expressed as
$$\begin{aligned} \tau _{mz}=\frac{3}{4}\pi a^3 \mu _0 \chi ^2 B^2 \sin (2\theta ) \end{aligned}$$
where a is the radius of a particle, \(\mu _0\) is the vacuum permeability, \(\chi\) is the particle susceptibility and \(\theta\) is the phase lag between the external field and the induced dipole moment. For the microswimmer to be actuated with steady rotation, the phase lag must satisfy the condition \(\sin (2\theta )<1\) [37]. The drag torque \(\varvec{\tau _r}\) due to hydrodynamic interaction can be obtained by combining the torques on each particle individually [38]. Similarly, we have \(\varvec{\tau _r} = [0\quad 0 \quad \tau _{rz}]^T\). For each particle, the drag torque is given by
$$\begin{aligned} \varvec{\tau _{rz,i}}=\varvec{d_i}\times \varvec{F_{d,i}} \end{aligned}$$
$$\begin{aligned} \varvec{F_{d,i}}=D_d\varvec{V_i}=D_d(\varvec{\omega _z}\times \varvec{d_i}) \end{aligned}$$
where \(\varvec{d_i}\) and \(\varvec{F_{d,i}}\) are the position vector and drag force of the \(i\)-th microparticle, and \(D_d\) is the drag force coefficient. For spherical microparticles without any boundary effects, \(D_d=-6\pi \eta a\). The total drag torque is given by
$$\begin{aligned} \varvec{\tau _{rz}}=\sum _{i=1}^{3}\varvec{d_i}\times \varvec{F_{d,i}} = D_d \varvec{\omega _z}\sum _{i=1}^{3} {d_i^2} \end{aligned}$$
Eq. 12 shows that a larger \(\sum _{i=1}^{3}d^2_i\) leads to a larger drag torque at the same input frequency.
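The per-particle summation (Eqs. 10–11) and the closed form of Eq. 12 can be cross-checked numerically; the in-plane particle positions below are illustrative, while the viscosity and radius match the water and ~4.5 μm particles used later in the paper:

```python
import numpy as np

eta = 1.0e-3                    # viscosity of water (Pa s)
a = 2.25e-6                     # particle radius (m)
D_d = -6*np.pi*eta*a            # Stokes drag coefficient, no boundary effects

omega_z = np.array([0.0, 0.0, 2*np.pi*10.0])   # rotation about Z at 10 Hz

# Illustrative in-plane position vectors d_i relative to the rotation axis (m).
d = np.array([[ 3.0e-6,  0.0,    0.0],
              [-1.5e-6,  2.0e-6, 0.0],
              [-1.5e-6, -2.0e-6, 0.0]])

# Sum of per-particle torques d_i x F_{d,i} with F_{d,i} = D_d (omega x d_i).
tau_r = sum(np.cross(d_i, D_d*np.cross(omega_z, d_i)) for d_i in d)

# Closed form of Eq. 12: tau_rz = D_d * omega_z * sum_i d_i^2.
tau_r_closed = D_d * omega_z * np.sum(np.linalg.norm(d, axis=1)**2)

assert np.allclose(tau_r, tau_r_closed)
```

The drag torque opposes the rotation (negative Z-component), and scales linearly with both the input frequency and \(\sum _{i=1}^{3}d^2_i\), which is the lever the swimmer can adjust by changing its pose.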
Pose-change frequency
Since the magnetic flux density is uniform and constant, the magnetic torque on the microswimmer is bounded (Eq. 9). However, the drag torque depends on the input frequency of the magnetic field and on the rotation pose of the microswimmer. Next, from the torque-balance perspective, we show how the swimming behaviors of our microswimmer vary with increasing input frequency. The phase lag for a given input frequency and magnetic flux density is [37]
$$\begin{aligned} \sin (2\theta )=\frac{96\eta \omega }{\mu _0\chi ^2 B^2\ln (\frac{3}{2})} \end{aligned}$$
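A minimal sketch of Eq. 13 (the susceptibility value here is an assumed placeholder, not a measured property of the particles): the phase lag grows linearly with the input frequency, and setting \(\sin(2\theta)=1\) gives the largest angular frequency compatible with steady rotation:

```python
import numpy as np

mu0 = 4*np.pi*1e-7     # vacuum permeability (T m/A)
eta = 1.0e-3           # viscosity of water (Pa s)
chi = 1.0              # particle susceptibility (assumed placeholder)
B = 9.0e-3             # field flux density (T), 9 mT as in the experiments

def sin_2theta(omega):
    """Phase-lag relation of Eq. 13 for angular frequency omega (rad/s)."""
    return 96*eta*omega / (mu0 * chi**2 * B**2 * np.log(1.5))

# Steady rotation requires sin(2*theta) < 1; the largest admissible
# angular frequency follows from setting sin(2*theta) = 1.
omega_max = mu0 * chi**2 * B**2 * np.log(1.5) / (96*eta)
assert np.isclose(sin_2theta(omega_max), 1.0)
```

Above `omega_max` the phase-lag condition can no longer be satisfied and steady rotation breaks down; the numerical value of this bound depends strongly on the actual susceptibility.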
In order to balance the two torques, the term \(\sum _{i=1}^{3}d^2_i\) in Eq. 12 must change its value for different input frequencies, which results in different rotation poses of the microswimmer. However, the adjustable range of this term is limited. Let us consider two cases of the microswimmer under actuation, i.e., simple rotation and propulsion. We simplify the microswimmer as an isosceles triangle with two sides of identical length L and included angle \(\gamma\). Since we only consider microswimmers with two perpendicular planes of symmetry, \(\gamma\) in our analysis is set to \(\pi /3<\gamma <\pi\).
First, we assume that the microswimmer is actuated with simple rotation as shown in Fig. 4a. From the top view, the rotation axis is a dot with coordinates \((x_r,y_r)\). From a geometrical perspective, we have
$$\begin{aligned} \sum _{i=1}^{3} {d_i^2}=\sum _{i=1}^{3}\left[ (x_i-x_r)^2 + (y_i-y_r)^2\right] \end{aligned}$$
where \((x_i,y_i)\) are the coordinates of the \(i\)-th particle's center. The minimal value of \(\sum _{i=1}^{3}d^2_i\) is attained when the rotation axis passes through the centroid of the simplified isosceles triangle, given by
$$\begin{aligned} (x_c,y_c)=\left( \frac{\sum _{i=1}^{3}x_i}{3},\frac{\sum _{i=1}^{3}y_i}{3}\right) \end{aligned}$$
Substituting Eq. 15 into Eq. 14 yields
$$\begin{aligned} \sum _{i=1}^{3}d^2_{i{\rm min}} = \frac{2}{3}L^2 (2+\sin \gamma -\cos \gamma ) \end{aligned}$$
$$\begin{aligned} \sum _{i=1}^{3}d^2_{i{\rm min}}\in (1.58 L^2, 2.28 L^2) \end{aligned}$$
Then, we assume the microswimmer is actuated with propulsion as shown in Fig. 4b. In this scenario, the minimal value is attained when the rotation axis is parallel to the longest side of the triangle, which is the side opposite the angle \(\gamma\) since \(\gamma >\pi /3\). Calculation shows that the minimum occurs when the rotation axis passes through the point p, a point of trisection of the height with respect to the longest side. Similarly, we have
$$\begin{aligned} \sum _{i=1}^{3}d^2_{i{\rm min}} = \frac{2}{3} L^2 \cos ^2\frac{\gamma }{2} \end{aligned}$$
$$\begin{aligned} \sum _{i=1}^{3}d^2_{i{\rm min}} \in \left( 0,\frac{1}{2}L^2\right) \end{aligned}$$
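The two minima (Eqs. 16 and 18) and their quoted ranges (Eqs. 17 and 19) can be verified numerically over the admissible range of \(\gamma\) (L is normalized to 1 here):

```python
import numpy as np

L = 1.0
gamma = np.linspace(np.pi/3 + 1e-6, np.pi - 1e-6, 100000)

rot = (2.0/3.0) * L**2 * (2 + np.sin(gamma) - np.cos(gamma))   # Eq. 16
prop = (2.0/3.0) * L**2 * np.cos(gamma/2)**2                   # Eq. 18

# Ranges quoted in Eqs. 17 and 19:
print(rot.min(), rot.max())     # ~1.58 L^2 up to ~2.28 L^2 (max at gamma = 3*pi/4)
print(prop.min(), prop.max())   # ~0 up to ~0.5 L^2

# For every admissible gamma, the propulsion pose gives a smaller drag torque.
assert np.all(prop < rot)
```

The last assertion is the geometric core of the argument: whatever the included angle, propulsion always offers a smaller \(\sum d_i^2\), and hence a smaller drag torque, than simple rotation.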
The analysis above, in particular Eqs. 17 and 19, shows that at the same input frequency the drag torque is smaller if the microswimmer exhibits propulsion rather than simple rotation. Finally, let us consider a specific case. As we increase the input frequency continuously, the microswimmer at first exhibits simple rotation, and then tends to change its actuation behavior so as to reduce the drag torque. The only feasible way is to reduce the distances between the microparticles and the rotation axis (the term \(\sum _{i=1}^{3}d^2_i\) in Eq. 12). Therefore, the microswimmer has to change from simple rotation to propulsion when the input frequency is higher than a certain value \(\omega _{c}\), which we name the pose-change frequency. When the angular velocity is high enough, the propulsion force of the microswimmer exceeds the net effect of gravitational force and buoyancy, so that it swims. A switch from simple rotation to propulsion can thus be realized by increasing the input frequency to a value higher than \(\omega _{c}\). For example, in Fig. 1, \(\omega _1\) is below \(\omega _{c}\), while \(\omega _2\), \(\omega _3\) and \(\omega _4\) are higher than \(\omega _{c}\).
Besides the two specific scenarios shown in Fig. 4a, b, other dynamic behaviors can be realized as well. As shown in Fig. 4c, we define the simplified triangle to have an angle \(\varphi\) with the X-axis and a distance \(d_m\) between the vertex and the rotation axis. These two parameters represent the propulsion pose relative to the rotation axis. Here \(\sum _{i=1}^{3}d^2_i\) is calculated as
$$\begin{aligned} \sum _{i=1}^{3}d^2_i = 3d_m^2 + L^2[\cos ^2(\varphi )+\cos ^2(\varphi -\gamma )]-2Ld_m[\cos (\varphi )+\cos (\varphi -\gamma )] \end{aligned}$$
For a given microswimmer with \(\gamma =\pi /2\), if we define \(d_m=\sigma L\) (\(0 \le \sigma \le 1\)), Eq. 20 simplifies to
$$\begin{aligned} \sum _{i=1}^{3}d^2_i = L^2[3\sigma ^2 + 1 -2\sigma (\sin \varphi + \cos \varphi )] \end{aligned}$$
The range of \(\varphi\) is set to \(0\le \varphi \le \pi /4\), since \(\pi /4 \le \varphi \le \pi /2\) yields the same values of \(\sum _{i=1}^{3}d^2_i\) because of the symmetry of the model. We use MATLAB to calculate the distribution of the value in Eq. 21, and the results are shown in Fig. 5. The maximum value occurs at \(\sigma = 1\) and \(\varphi =0\), corresponding to the pose with angular velocity \(\omega _2\) in Fig. 1. Interestingly, the minimum value is \(\sum _{i=1}^{3}d^2_i=L^2 /3\) at \(\sigma = 0.47\) and \(\varphi = \pi /4\), which also confirms that the minimal value is attained when the rotation axis is parallel to the longest side of the simplified triangle model.
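The MATLAB grid search behind Fig. 5 can be reproduced as follows (the grid resolution is an arbitrary choice); the sketch also verifies that the general expression of Eq. 20 reduces to Eq. 21 for \(\gamma = \pi/2\):

```python
import numpy as np

L = 1.0
gamma = np.pi/2

sigma = np.linspace(0.0, 1.0, 1001)
phi = np.linspace(0.0, np.pi/4, 1001)
S, P = np.meshgrid(sigma, phi, indexing='ij')

# General expression, Eq. 20, with d_m = sigma * L:
full = 3*(S*L)**2 + L**2*(np.cos(P)**2 + np.cos(P - gamma)**2) \
       - 2*L*(S*L)*(np.cos(P) + np.cos(P - gamma))
# Simplified expression, Eq. 21:
val = L**2*(3*S**2 + 1 - 2*S*(np.sin(P) + np.cos(P)))
assert np.allclose(full, val)   # Eq. 20 reduces to Eq. 21 for gamma = pi/2

imin = np.unravel_index(np.argmin(val), val.shape)
imax = np.unravel_index(np.argmax(val), val.shape)
print(val[imin], S[imin], P[imin])   # ~L^2/3 at sigma ~0.47, phi = pi/4
print(val[imax], S[imax], P[imax])   # 2 L^2 at sigma = 1, phi = 0
```

The analytic minimizer is \(\sigma = \sqrt{2}/3 \approx 0.471\) (from \(\partial_\sigma = 0\) at \(\varphi = \pi/4\)), which the grid search recovers to within its resolution.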
To simulate and understand how a solid surface affects the swimming behaviors, two finite element method (FEM) models were established using COMSOL Multiphysics (insets in Fig. 6a, d) to investigate the fluid flows (Fig. 6a, b) and pressure (Fig. 6b, c, e, f) induced by the rotating microswimmer. The microswimmer is modeled as three spheres with a diameter of 4.5 \(\upmu \hbox {m}\) and an angle \(\gamma = \pi /2\), and is set to be actuated in water at a frequency of 10 Hz. The solid surface is modeled as a no-slip wall at the bottom. The simulations cover two cases: Fig. 6a–c show the results with the microswimmer near (0.75 \(\upmu \hbox {m}\)) the surface, and Fig. 6d–f show the results with it farther (20.75 \(\upmu \hbox {m}\)) from the surface. After over ten full rotations, the induced pressure distribution and streamlines of the surrounding fluid are calculated. Figure 6a, b shows that the microswimmer induces a net flow of fluid along the direction of the rotation axis, similar to the propulsion of a helical flagellum [15, 39]. The fluid impinges on the substrate, resulting in enhanced pressure [40]. For the case of rotation near the surface, a pressure difference between the areas just above and just below the microswimmer is observed in Fig. 6b, c. This difference becomes negligible when the microswimmer is 20.75 \(\upmu \hbox {m}\) above the surface (Fig. 6e, f). The largest pressure difference around each particle is on the order of \(10^{-2}\) Pa (Fig. 7). The affected area on the microswimmer is on the order of \(10^1\) \(\upmu \hbox {m}^2\), so the net force along the Z-axis due to the pressure difference works out to the piconewton range.
The microswimmer and experimental setup
In our experiments, the microswimmer was obtained by direct sedimentation of a colloidal suspension of paramagnetic microparticles (Spherotech PMS-40-10) in DI water. These microparticles have a density of 1.27 \(\hbox {g}/\hbox {cm}^3\) and a diameter of 4–5 \(\upmu \hbox {m}\) with a smooth surface. Sedimentation introduces randomness into the process, resulting in different structures. Nonetheless, three-particle structures can be easily obtained and directly used in our experiments. During the magnetic actuation, we did not observe deformation of the swimmer when turning the field on and off, which indicates that the links between the microparticles are fixed and stable.
Our electromagnetic coil setup consists of three orthogonally placed Helmholtz coil pairs, a swimming tank containing a Si substrate, and a light microscope with a recording camera on top. The rotating magnetic field is generated by the coil system (Fig. 8), driven by three servo amplifiers (ADS 50/5 4-Q-DC, Maxon Inc.). The amplifiers are controlled by a LabVIEW program through an analog and digital I/O card (Model 826, Sensoray Inc.); the frequency, field strength, yaw angle (\(\beta\)) and pitch angle (\(\alpha\)) can be adjusted through this program. A schematic of the magnetic field is shown in Fig. 2b. The swimming tank (\(21 \hbox { mm} \times 21 \hbox { mm} \times 3 \hbox { mm}\)) filled with DI water is placed in the middle of the coils, and the Si substrate inside provides a solid surface. The top camera records the motion of the microswimmer at a rate of 50 fps.
The microrobot swims away from the solid surface (\(\alpha =0^{\circ }\))
The microswimmer was actuated at frequencies from 1 to 16 Hz on a Si substrate in the tank. The flux density of the magnetic field was maintained at 9 mT during the experiments. When the input frequency is below 8 Hz, the microswimmer exhibits simple rotation and no translational velocity is observed (Fig. 9a), whereas it exhibits propulsion with varied poses when the input frequency is higher than 8 Hz (Fig. 9b). The experimental results show that 8 Hz is the pose-change frequency \(\omega _{c}\). When the input frequency is below \(\omega _{c}\) (8 Hz), the microswimmer exhibits simple rotation and the drag torque is small enough to be balanced by the magnetic torque. The projection of the rotation axis in the XY-plane gradually approaches the centroid of the simplified triangle as the input frequency increases, in order to reduce the drag torque (Fig. 4a). Such adjustment of the rotation axis does not change the actuation pose (simple rotation) of the microswimmer. Equations 14–17 show the limitation of this adjustment, which also explains why the microswimmer cannot maintain simple rotation at input frequencies higher than \(\omega _{c}\). When the input frequency is higher than \(\omega _{c}\) (8 Hz), the drag torque is affected by both the pose angle \(\varphi\) and the distance \(d_m\) (Fig. 4c). Different input frequencies of the magnetic field change the drag torque, and the dynamic behaviors of the microswimmer are governed by the interplay of the magnetic and resistive torques. Transient dynamic behaviors appear when turning on the magnetic field or changing the input frequency (see Additional file 1). As shown in the experimental results (Fig. 9b), after turning on the magnetic field these transient behaviors last less than 2 s (0–2 s). After that the microswimmer exhibits steady rotation and propulsion (2–29 s).
The swimming velocity of the microswimmer along the Z-axis is measured as a fixed distance \(\Delta z\) divided by the elapsed time \(\Delta t\). The measurement follows three steps. First, the focal plane of the microscope is set on the substrate, followed by starting the recording and turning the magnetic field on; this step records the starting time. Then, the focal plane is adjusted to 20 \(\upmu \hbox {m}\) above the substrate. The microswimmer is observed swimming across the focal plane as it gradually comes into focus and then goes out of focus. Finally, we find the best-focused frame in the recorded video to determine the time \(\Delta t\). Using this method, the swimming velocity of the microswimmer in the space 0–20 \(\upmu \hbox {m}\) above the substrate (bottom space) is measured. The velocity in the space 20–40 \(\upmu \hbox {m}\) above the substrate (upper space) is measured using the same method. After turning off the magnetic field, the microswimmer gradually sinks onto the substrate due to gravity. The swimming velocity against frequency in the bottom and upper spaces is depicted in Fig. 10.
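The three-step measurement reduces to a frame-count calculation at the camera's 50 fps; the frame indices in this sketch are hypothetical:

```python
# Sketch of the focal-plane timing used to measure the Z-velocity.
FPS = 50                 # camera frame rate (frames per second)
DELTA_Z_UM = 20.0        # spacing between the two focal planes (um)

def z_velocity(field_on_frame, best_focus_frame):
    """Return the swimming velocity (um/s) as Delta z divided by the
    frame-count time Delta t at the camera frame rate."""
    dt = (best_focus_frame - field_on_frame) / FPS
    return DELTA_Z_UM / dt

# e.g. best focus reached 100 frames (2 s) after the field is switched on:
print(z_velocity(0, 100))   # -> 10.0 um/s
```

The 50 fps frame rate limits the timing resolution to 20 ms, which is one source of the error bars in Fig. 10.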
Next, we show magnetic steering of the microswimmer. It swims along the +Z-direction after the field is applied at a frequency of 10 Hz with \(\alpha =0^{\circ }\). After rising \(25 \,\upmu \hbox {m}\) from the substrate, it can stay in focus when the pitch angle \(\alpha\) is adjusted to \(80^{\circ }\), showing negligible displacement along the Z-direction (see Additional file 1). The propulsive force is directed along the normal of the applied rotating field. In this scenario, gravitational force and buoyancy are balanced by the Z-component of the propulsive force. Steering can be performed by adjusting the yaw angle \(\beta\) of the field from 0\(^{\circ }\) to 360\(^{\circ }\), as shown in Fig. 11. The microswimmer did not show visible sidewise drift because of the absence of the boundary effect [33]. The propulsive force is estimated from the equilibrium of forces, which involves the gravitational force, buoyancy and propulsive force. The gravitational force and buoyancy are 1.82 and 1.43 pN, respectively, and the propulsive force is calculated to be 1.14 pN. Based on the simulation results, the net force generated by the pressure difference is also on the order of piconewtons, which implies that this net force can enhance the swimming velocity. Figure 10 shows that the microswimmer has a higher swimming velocity in the bottom space (0–20 \(\upmu \hbox {m}\) above the surface), which validates our calculation and simulation results.
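The reported gravitational force and buoyancy can be roughly cross-checked from the particle geometry; the radius of 2.25 μm (from the 4–5 μm diameter) and g = 9.81 m/s² are assumptions of this sketch, so the results land close to, but not exactly at, the reported values:

```python
import numpy as np

a = 2.25e-6          # assumed particle radius (m)
rho_p = 1270.0       # particle density (kg/m^3), i.e., 1.27 g/cm^3
rho_w = 1000.0       # density of DI water (kg/m^3)
g = 9.81             # gravitational acceleration (m/s^2)

V3 = 3 * (4.0/3.0)*np.pi*a**3        # volume of the three-particle swimmer (m^3)
F_g = rho_p * V3 * g                 # gravitational force (N)
F_b = rho_w * V3 * g                 # buoyancy (N)

print(F_g*1e12, F_b*1e12)  # ~1.78 pN and ~1.40 pN, close to the reported 1.82/1.43 pN
```

The net downward force (weight minus buoyancy) is thus a few tenths of a piconewton, which the Z-component of the propulsive force must overcome for the swimmer to rise.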
Actuation of the microrobot near the solid surface (\(\alpha =90^{\circ }\))
The microswimmer was actuated with pitch angle \(\alpha =90^{\circ }\) (i.e., its rotation axis parallel to the horizontal substrate) above the Si substrate. It shows frequency-dependent motion regimes, namely tumbling (1–3 Hz) and wobbling (3–20 Hz). The plot of velocity versus frequency is depicted in Fig. 12. When the input frequency is below 3 Hz, the microswimmer exhibits tumbling motion with a 90\(^{\circ }\) precession angle. After increasing the input frequency above 3 Hz, the microswimmer exhibits wobbling motion and the precession angle decreases continuously at higher frequencies. Previous studies show that the dynamic regimes for the tumbling-to-wobbling transition of a magnetic microswimmer depend not only on the frequency of the magnetic field, but also on the geometry and easy-axis orientation of the microswimmer [41]. In our experiments, when the microswimmer is actuated at frequencies of 1–3 Hz, both the easy axis and the induced magnetic moment are oriented along the field direction. The easy axis and the induced magnetic moment rotate with a phase lag behind the magnetic field, resulting in tumbling motion of the microswimmer. After increasing the input frequency above 3 Hz, the drag torque increases and its interplay with the magnetic torque results in the wobbling regime. During the experiments, the precession angle of the microswimmer decreases with increasing input frequency of the magnetic field, in agreement with the theoretical prediction [41]. The swimming velocity reaches its maximum under a magnetic field at a frequency of 10 Hz, similar to the results shown in Fig. 10.
Drifting of the microswimmer occurs due to the boundary effects. The drag coefficient is constant for a given spherical particle in bulk fluid, but the presence of a solid surface increases the drag on a body, and this increase diminishes with growing distance between the microswimmer and the surface [42]. To be specific, a segment of the microswimmer closer to the surface experiences larger drag than one farther from the surface, which causes the microswimmer to drift sidewise, perpendicular to the rotation axis. Figure 12 also indicates that, unlike the ABF in [33], the drift velocity of the microswimmer does not increase linearly with the input frequency.
In this paper, we demonstrated the dynamic behaviors of a three-particle paramagnetic microswimmer near a solid surface. These dynamic behaviors depend on the input frequency of the rotating magnetic field, and varied actuation poses can be switched by adjusting the frequency. Simulations of the microswimmer near (\(0.75\,\upmu \hbox {m}\)) and farther away (\(20.75 \,\upmu \hbox {m}\)) from a solid surface were investigated, and they are in good agreement with the experimental results. Finally, the effects of a solid surface on the swimming behaviors were identified, i.e., enhancing the swimming velocity when the microswimmer exhibits propulsion perpendicular to the horizontal surface and causing sidewise drift when it is actuated parallel to the surface. Future studies will focus on motion control of the microswimmer in biofluids with different viscosities.
Schematic of the dynamic behaviors of a three-particle microswimmer under a rotating magnetic field. Black dashed line and arrows refer to the rotation axis and direction with angular velocity \(\omega _1<\omega _2<\omega _3 <\omega _4\), blue arrows refer to velocity with \(v _1>v_2\). The microswimmer exhibits simple rotation (\(\omega _1\)) and propulsion under different input frequencies (\(\omega _2\), \(\omega _3\), \(\omega _4\)). Gray rectangle refers to a solid surface
Structure of the microswimmer and the applied magnetic field. a Microswimmer is treated as a rigid structure with two mutually perpendicular planes of symmetry. b Schematic of the rotating magnetic field with constant flux density. Blue dashed line and arrow refer to the normal line and rotation direction of the magnetic field, respectively. Pitch angle \(\alpha\) is between the normal line and Z-axis, and yaw angle \(\beta\) is between X-axis and projection of the normal line in the XY-plane
Schematic of the microswimmer actuated with simple rotation. Black dot and arrow refer to the rotation axis and direction of the microswimmer with angular velocity \(\omega\), respectively. The three blue arrows \(d_i\) are the position vectors of the three particles, and red arrows \(F_{d,i}\) are the drag forces exerted on each particle (i = 1, 2, 3)
a Schematic of the microswimmer actuated with simple rotation. The rotation axis of the microswimmer passes through the centroid of the simplified isosceles triangle (red dashed lines). The red dot refers to the projection of the rotation axis as well as the centroid with coordinates (\(x_c\),\(y_c\)). b The rotation axis of the microswimmer is parallel to the longest side of the simplified isosceles triangle. c Dynamic behaviors with pose angle \(\varphi\) and distance \(d_m\). Black arrows and dashed lines refer to the angular velocity and rotation axis of the microswimmer, respectively. b, c have the same coordinate system
Distribution of the value of \(\sum _{i=1}^{3}d^2_i\). The minimum and maximum values are marked with the corresponding rotation poses. The two insets have the same coordinate
Simulation of the microswimmer rotating above a no-slip wall. The rotation axis is defined as the Z-axis; numbers represent dimensions in micrometers. The microswimmer is modeled with two different boundary conditions: a \(0.75 \,\upmu \hbox {m}\) and d \(20.75 \,\upmu \hbox {m}\) above the no-slip wall, as shown in the insets of a and d, respectively. a, b The streamlines generated by the microswimmer. b, c, e, f Pressure induced by rotation of the microswimmer at a frequency of 10 Hz in a plane \(0.5 \,\upmu \hbox {m}\) below b, e and above c, f the microswimmer; b, c are with the microswimmer \(0.75 \,\upmu \hbox {m}\) above the no-slip wall, and e, f are \(20.75 \,\upmu \hbox {m}\) above the no-slip wall. The color legends in the main frame illustrate the magnitude of pressure (Pa). The red areas of the insets in b, c indicate pressure higher than 0.1 Pa, and those in e, f higher than 0.04 Pa. b, c, e, f and all insets have the same coordinate system
Simulation results of the pressure distribution near the microswimmer. The three lines denote the pressure near the three particles in blue, green, and red, respectively
Magnetic actuation setup. Three-axis Helmholtz electromagnetic coils generate the rotating magnetic field. A camera is mounted on top of a light microscope for video recording. The setup is controlled by a PC and a controller box containing three amplifiers and one power supply
a Time-lapse images of the three-particle magnetic microswimmer actuated with simple rotation under a rotating magnetic field at a frequency of 7 Hz. b The microswimmer actuated with dynamic behaviors (0–2 s) and steady propulsion (2–29 s) under a rotating magnetic field at a frequency of 9 Hz. During 0–2 s, the focal plane is on the substrate; it is then adjusted to planes 20 \(\upmu \hbox {m}\) (3–14 s) and 40 \(\upmu \hbox {m}\) (15–29 s) above the substrate, respectively. Blue arrows denote the rotation direction of the microswimmer. Scale bar is 10 \(\upmu \hbox {m}\) and applies to all images in a, b
Velocity of the microswimmer against the frequency of the applied magnetic field. The rotation axis is perpendicular to the solid surface (\(\alpha =0^{\circ }\)). Blue and black lines give the velocity in the bottom space (0–20 \(\upmu \hbox {m}\) above the substrate) and upper space (20–40 \(\upmu \hbox {m}\) above the substrate), respectively. Error bars denote the standard errors over the observation time used to calculate velocity
Steering of the microswimmer. Swimming trajectory (red line) of the microswimmer in a plane 25 \(\upmu \hbox {m}\) above the substrate (top view). The blue arrow denotes the swimming direction, and the red rectangle is used to track the position of the microswimmer. Scale bar is 10 \(\upmu \hbox {m}\)
Velocity of the microswimmer near a solid surface. Swimming and drift velocity of the microswimmer with its rotation axis parallel to the solid surface (\(\alpha =90^{\circ }\)). The microswimmer exhibits dynamic behaviors with increasing frequency. The error derives from the pixel size of the camera and the ImageJ software
QW designed the experiments, built the analytical model and simulation, and drafted the manuscript. QW and JY performed the experiments. LY designed the magnetic actuation system. LZ supervised the project and contributed to the revision of the draft. Part of the work will be presented at the 2017 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2017). All authors read and approved the final manuscript.
Acknowledgements
The authors thank D. D. Jin (Chinese University of Hong Kong) for the fruitful discussions.
This work was supported by the Early Career Scheme (ECS) grant (Project No. 439113) and the General Research Fund (GRF) (Project Nos. 14209514, 14203715 and 14218516) from the Research Grants Council (RGC), the ITF project (Project No. ITS/231/15) funded by the HKSAR Innovation and Technology Commission (ITC), and the National Natural Science Funds of China for Young Scholar (Project No. 61305124).
Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
Qianqian Wang, Lidong Yang, Jiangfan Yu & Li Zhang
Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, 518172, China
Correspondence to Li Zhang.
40638_2017_76_MOESM1_ESM.mp4
Additional file 1. This video demonstrates the microswimmer actuated under a rotating magnetic field at a frequency of 7 Hz and 9 Hz, and magnetic steering of the microswimmer.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Wang, Q., Yang, L., Yu, J. et al. Characterizing dynamic behaviors of three-particle paramagnetic microswimmer near a solid surface. Robot. Biomim. 4, 20 (2017). https://doi.org/10.1186/s40638-017-0076-0
Swimming microrobot
Boundary effect
Low Reynolds number
Dynamic behavior | CommonCrawl |
Join-idle-queue with service elasticity: large-scale asymptotics of a non-monotone system
D. Mukherjee, A. Stolyar
Research output: Contribution to journal › Article › Academic
We consider the model of a token-based joint auto-scaling and load balancing strategy, proposed in a recent paper by Mukherjee, Dhara, Borst, and van Leeuwaarden (SIGMETRICS '17), which offers an efficient scalable implementation and yet achieves asymptotically optimal steady-state delay performance and energy consumption as the number of servers $N\to\infty$. In the above work, the asymptotic results are obtained under the assumption that the queues have fixed-size finite buffers, and therefore the fundamental question of stability of the proposed scheme with infinite buffers was left open. In this paper, we address this fundamental stability question. The system stability under the usual subcritical load assumption is not automatic. Moreover, the stability may not even hold for all $N$. The key challenge stems from the fact that the process lacks monotonicity, which has been the powerful primary tool for establishing stability in load balancing models. We develop a novel method to prove that the subcritically loaded system is stable for large enough $N$, and establish convergence of steady-state distributions to the optimal one, as $N \to \infty$. The method goes beyond the state of the art techniques -- it uses an induction-based idea and a "weak monotonicity" property of the model; this technique is of independent interest and may have broader applicability.
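The token mechanism underlying the join-idle-queue rule can be illustrated with a toy discrete-event simulation. This sketch omits the auto-scaling component of the paper's model and uses illustrative parameters throughout: idle servers post tokens at the dispatcher, and an arrival joins a tokened server when one exists, otherwise a uniformly random server.

```python
import random

def simulate_jiq(n_servers=50, load=0.8, service_rate=1.0,
                 horizon=5000.0, seed=1):
    """Toy discrete-event simulation of the basic join-idle-queue rule.
    Returns the time-average queue length per server."""
    rng = random.Random(seed)
    queues = [0] * n_servers                # jobs at each server
    idle_tokens = set(range(n_servers))     # all servers start idle
    next_dep = [None] * n_servers           # next departure time, None if idle
    next_arr = rng.expovariate(load * n_servers)
    t, area = 0.0, 0.0
    while t < horizon:
        # earliest pending event: the next arrival or the earliest departure
        t_next, event = next_arr, ("arr", None)
        for i, d in enumerate(next_dep):
            if d is not None and d < t_next:
                t_next, event = d, ("dep", i)
        area += sum(queues) * (t_next - t)
        t = t_next
        if event[0] == "arr":
            # route to a tokened (idle) server if any, else a random one
            i = idle_tokens.pop() if idle_tokens else rng.randrange(n_servers)
            queues[i] += 1
            if queues[i] == 1:              # server was idle: start service
                next_dep[i] = t + rng.expovariate(service_rate)
            next_arr = t + rng.expovariate(load * n_servers)
        else:
            i = event[1]
            queues[i] -= 1
            if queues[i] > 0:               # start the next waiting job
                next_dep[i] = t + rng.expovariate(service_rate)
            else:                           # server idles and posts a token
                next_dep[i] = None
                idle_tokens.add(i)
    return area / (horizon * n_servers)
```

Under subcritical load the simulated time-average queue length per server stays close to the load, consistent with the near-zero waiting the scheme targets.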
math.PR
cs.PF
Author's version available on arXiv
Mukherjee, D., & Stolyar, A. (2018). Join-idle-queue with service elasticity: large-scale asymptotics of a non-monotone system. arXiv.
Journal of Exposure Science & Environmental Epidemiology
A framework for estimating the US mortality burden of fine particulate matter exposure attributable to indoor and outdoor microenvironments
Parham Azimi & Brent Stephens
Journal of Exposure Science & Environmental Epidemiology volume 30, pages 271–284 (2020)
Exposure to fine particulate matter (PM2.5) is associated with increased mortality. Although epidemiology studies typically use outdoor PM2.5 concentrations as surrogates for exposure, the majority of PM2.5 exposure in the US occurs in microenvironments other than outdoors. We develop a framework for estimating the total US mortality burden attributable to exposure to PM2.5 of both indoor and outdoor origin in the primary non-smoking microenvironments in which people spend most of their time. The framework utilizes an exposure-response function combined with adjusted mortality effect estimates that account for underlying exposures to PM2.5 of outdoor origin that likely occurred in the original epidemiology populations from which effect estimates are derived. We demonstrate the framework using several different scenarios to estimate the potential magnitude and bounds of the US mortality burden attributable to total PM2.5 exposure across all non-smoking environments under a variety of assumptions. Our best estimates of the US mortality burden associated with total PM2.5 exposure in the year 2012 range from ~230,000 to ~300,000 deaths. Indoor exposure to PM2.5 of outdoor origin is typically the largest total exposure, accounting for ~40–60% of total mortality, followed by residential exposure to indoor PM2.5 sources, which also drives the majority of variability in each scenario.
Elevated outdoor concentrations of fine particulate matter (i.e., the mass concentration of particles ≤ 2.5 µm in aerodynamic diameter; PM2.5) have been consistently associated with increased mortality in numerous epidemiology studies [1,2,3,4,5,6,7,8,9]. Although epidemiology studies typically use centrally monitored outdoor PM2.5 concentrations as surrogates for average human exposures to PM2.5 of outdoor origin, the majority of exposure to PM2.5 of outdoor origin in the US and other industrialized nations typically occurs in various other microenvironments, including inside residences, offices, schools, and vehicles [10,11,12,13,14,15,16]. This is because people spend the majority of their time in microenvironments other than outdoors [17, 18] and outdoor PM2.5 can infiltrate and persist into different microenvironments with varying efficiencies [19,20,21,22,23,24]. There are also many PM2.5 sources present in non-smoking indoor microenvironments, including cooking [25,26,27], burning incense and candles [28, 29], operating office equipment [30, 31], resuspension from settled dust from human activities such as walking and cleaning [32, 33], and secondary organic aerosols from oxidation reactions [34]. To date, the vast majority of air pollution epidemiology studies and quantitative risk assessments have not explicitly accounted for these varied microenvironmental exposures [35, 36].
The objective of this work is to develop a framework for estimating the total US mortality burden attributable to exposure to PM2.5 of both indoor and outdoor origin in the primary non-smoking microenvironments in which people spend most of their time. The framework primarily utilizes a modified version of an exposure-response function commonly used for air pollution risk assessment combined with adjusted mortality effect estimates that account for estimates of underlying microenvironmental exposures to PM2.5 of outdoor origin that likely occurred in prior epidemiology cohort studies. We demonstrate the utility of the framework by conducting several scenario analyses to estimate the likely magnitude and bounds of the US mortality burden associated with long-term PM2.5 exposures that result from both indoor and outdoor PM sources in each microenvironment. While no single model scenario is considered to be the definitive representation of the US mortality burden of microenvironmental PM2.5 exposures due to unique data limitations in each case, each model scenario offers insight into how the framework can be used with richer data sets in the future to refine nationwide mortality estimates and ultimately to inform policy decisions to reduce exposures in the microenvironments in which they most often occur.
Selection of an appropriate exposure-response function
Integral to the model framework is the selection of an appropriate health impact function. A number of recent air pollution risk assessments have estimated mortality and/or morbidity associated with ambient PM2.5 exposure in various locations using different forms of health impact functions and associated effect estimates derived from epidemiology studies. Historically, most studies have used a variant of a generic exposure-response health impact function for ambient air pollution [37] to estimate a population's change in health endpoint (Δyi) due to a change in the assumed population-average exposure to pollutant i (ΔEi) (e.g., Eq. 1).
$$\Delta y_i = y_0\left[ {{\mathrm{exp}}\left( {\beta _i \times \Delta E_i} \right) - 1} \right]Pop$$
where y0 is the annual baseline prevalence of illness (per person per year), βi is the health endpoint effect estimate for pollutant i resulting from prior epidemiology studies (e.g., per μg/m3 of pollutant i), ΔEi is the change in exposure concentration relative to an assumed baseline or threshold concentration (e.g., μg/m3 of pollutant i, typically assuming outdoor concentrations are surrogates for exposure), and Pop is the size of the affected population. This approach has been used recently to estimate the mortality burden associated with outdoor PM2.5 concentrations in the US [38,39,40,41,42] and globally [43, 44]. For example, Fann et al. (2017) [42] used this approach with all-cause mortality effect estimates from Krewski et al. (2009) [5] to estimate that ~120,000 deaths (95% CI: 83,000–160,000) were associated with outdoor PM2.5 exposures in the US in 2010. Fann et al. (2017) [42] also made another estimate of ~200,000 (95% CI: 43,000–1,100,000) deaths associated with outdoor PM2.5 using a different model form and effect estimates from Nasari et al. (2016) [45]. Similar approaches have also recently been extended to estimate the chronic health burden associated with long-term indoor PM exposures using effect estimates taken directly from the outdoor air epidemiology literature [46,47,48,49,50].
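For concreteness, Eq. 1 can be evaluated directly. The inputs below are purely illustrative, not the paper's: a baseline all-cause mortality rate of 0.008 deaths per person per year, an effect estimate corresponding to a 6% risk increase per 10 μg/m3, an exposure 8 μg/m3 above threshold, and a population of 300 million.

```python
import math

def mortality_burden(y0, beta, delta_e, pop):
    """Eq. 1: attributable change in annual health endpoint counts for a
    population-average change in exposure delta_e (ug/m3) above threshold."""
    return y0 * (math.exp(beta * delta_e) - 1.0) * pop

# Illustrative (hypothetical) inputs, not taken from the paper:
beta = math.log(1.06) / 10.0        # per ug/m3, from RR = 1.06 per 10 ug/m3
deaths = mortality_burden(0.008, beta, 8.0, 300e6)
```

With these assumptions the function returns on the order of 10^5 deaths per year, the same magnitude as the published estimates cited above.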
Another widely used approach to air pollution risk assessment is the Global Burden of Disease (GBD) study's integrated exposure-response (IER) methodology [51,52,53,54,55,56], and its follow-up Global Exposure Mortality Model (GEMM) [57], which were developed in part because the generic expression in Eq. 1 is based on epidemiology cohort studies in the US and Europe with outdoor PM2.5 concentrations (typically below 30 µg/m3) that may not be representative for countries with much higher ambient air pollution levels [53] or for other, higher, PM2.5 exposures such as secondhand or active smoking. Here we primarily utilize a modified version of the generic exposure-response health impact function in Eq. 1 for the model framework because (a) it was developed for use with epidemiology studies with PM2.5 concentrations within the range of concern in non-smoking indoor and outdoor microenvironments in the US, (b) there is considerable uncertainty in the shape of the GBD IER function and its fitted parameters at lower PM2.5 concentrations most relevant to this study, and (c) it has been used successfully in other recent indoor microenvironmental exposure investigations. However, we also apply the IER model and evaluate its utility in the SI.
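The IER shape referenced above differs from the log-linear form of Eq. 1 in that the relative risk is bounded at high concentrations. A minimal sketch of the published IER functional form follows; the parameter values here are placeholders for illustration only, since the actual GBD fits are cause-specific.

```python
import math

def ier_relative_risk(c, c0, alpha, gamma, delta):
    """GBD integrated exposure-response (IER) shape: RR = 1 below the
    counterfactual concentration c0, then a bounded increase that
    saturates toward 1 + alpha at high concentrations (unlike the
    unbounded log-linear form of Eq. 1)."""
    if c <= c0:
        return 1.0
    return 1.0 + alpha * (1.0 - math.exp(-gamma * (c - c0) ** delta))

# Placeholder (hypothetical) parameters; real GBD fits are cause-specific.
rr_low = ier_relative_risk(15.0, c0=5.8, alpha=1.6, gamma=0.01, delta=0.8)
rr_high = ier_relative_risk(500.0, c0=5.8, alpha=1.6, gamma=0.01, delta=0.8)
```

The flattening at high concentrations is what makes the fitted curve uncertain at the low concentrations most relevant to this study, motivating the use of Eq. 1 in the main analysis.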
Modifying the exposure-response function
We modify the exposure-response function in Eq. 1 for PM2.5 in a manner similar to that in Logue et al. (2012) [48] to account for microenvironmental PM2.5 concentrations and exposures, albeit with a few additional modifications as shown in Eq. 2. First, we introduce a modified form of βi for ambient-generated PM2.5 (i.e., βPM2.5,AG,modified) to account for estimates of the underlying long-term average exposures to PM2.5 of outdoor origin that likely occurred in various microenvironments in the cohort populations used in the original epidemiology studies from which βPM2.5 was derived. This modification provides an adjusted effect estimate for outdoor PM2.5 based on estimates of long-term average microenvironmental exposures that can be more universally applied to other microenvironmental exposure estimates rather than using outdoor PM2.5 concentrations alone as a surrogate for exposure.
Second, we separately account for long-term average PM2.5 exposures above an assumed threshold concentration in each microenvironment j that result from ambient-generated sources (ΔCPM2.5,AG,j) and indoor-generated sources (ΔCPM2.5,IG,j). Third, tj accounts for the average fraction of time spent in a particular microenvironment j. Thus, the sums of ΔCPM2.5,AG,j × tj and ΔCPM2.5,IG,j × tj across all microenvironments more realistically account for total PM2.5 exposure (ΔEPM2.5) from both indoor and outdoor sources. Finally, we also allow for using different assumptions for modified mortality effect estimates for ambient-generated and indoor-generated PM2.5 (i.e., βPM2.5,AG,modified and βPM2.5,IG,modified, respectively). Although the framework can account for varying toxicity of ambient- and indoor-generated PM2.5, we assume equal toxicity here because of conflicting conclusions among the limited number of studies that have investigated differential toxicity using paired indoor, outdoor, and/or personal PM samples [58,59,60,61,62,63].
$$\Delta y_{PM2.5} = y_0\left[ {\exp \left( {\beta _{PM2.5,IG,modified} \times \mathop {\sum }\limits_j (\Delta C_{PM2.5,IG,j} \times t_j) + \beta _{PM2.5,AG,modified} \times \mathop {\sum }\limits_j (\Delta C_{PM2.5,AG,j} \times t_j)} \right) - 1} \right]Pop$$
We consider four main microenvironments in which people are exposed to PM2.5 of both indoor and outdoor origin: (i) inside residences, (ii) inside indoor environments other than residences (e.g., schools, business, restaurants, etc.), (iii) inside vehicles, and (iv) outdoors. Equation 3 shows modified forms of the Σ(ΔCPM2.5,IG,j×tj) and Σ(ΔCPM2.5,AG,j×tj) terms in Eq. 2 that account for the long-term average PM2.5 concentrations resulting from both indoor and outdoor sources and the average fraction of time spent inside each of these four primary microenvironments.
$$\begin{array}{l}\mathop {\sum }\limits_j (\Delta C_{PM2.5,IG,j} \times t_j) = \left( {\Delta C_{PM2.5,IG,residences} \times t_{residences}} \right)\\ + \left( {\Delta C_{PM2.5,IG,other\,indoor} \times t_{other\,indoor}} \right)\end{array}$$
(3a)
$$\begin{array}{l}\mathop {\sum }\limits_j (\Delta C_{PM2.5,AG,j} \times t_j) = \left( {\Delta C_{PM2.5,AG,residences} \times t_{residences}} \right)\\ + \left( {\Delta C_{PM2.5,AG,other\,indoor} \times t_{other\,indoor}} \right)\\ + \left( {\Delta C_{PM2.5,AG,vehicles} \times t_{vehicles}} \right) + \left( {\Delta C_{PM2.5,outdoor} \times t_{outdoor}} \right)\end{array}$$
(3b)
where ΔCPM2.5,IG,residences and ΔCPM2.5,IG,other indoor are the differences in long-term average concentrations of indoor-generated PM2.5 in non-smoking residences and all other non-smoking indoor environments other than residences, respectively, both compared to a baseline value in which there are no indoor PM2.5 sources (μg/m3); ΔCPM2.5,AG,residences, ΔCPM2.5,AG,other indoor, and ΔCPM2.5,AG,vehicles are the differences in long-term average concentrations of ambient-generated PM2.5 in residences, indoor environments other than residences, and vehicles, respectively, compared to a baseline value (μg/m3); ΔCPM2.5,outdoor is the difference in long-term average outdoor PM2.5 concentrations also compared to a baseline value (μg/m3); and tresidences, tother indoor, tvehicles, and toutdoor are the long-term average fractions of time spent inside each microenvironment, respectively. Note that Eq. 3a assumes there are no indoor sources of PM2.5 inside vehicles, primarily because of a lack of comprehensive surveys of in-vehicle PM sources, although several studies have shown that in-vehicle PM2.5 exposures can be higher than the near-roadway exposures in some circumstances [64, 65].
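The time-weighted sums in Eqs. 3a and 3b can be sketched as follows, using the NHAPS average time fractions cited later in the text and hypothetical microenvironmental concentrations (the concentration values are assumptions for illustration, not measured inputs).

```python
# NHAPS average time fractions (as cited in the text).
T = {"residence": 0.687, "other_indoor": 0.182, "vehicle": 0.055, "outdoor": 0.076}

def exposure_terms(c_ig, c_ag, t):
    """Eqs. 3a-3b: time-weighted exposures to indoor-generated (IG) and
    ambient-generated (AG) PM2.5 summed over the four microenvironments.
    IG sources are assumed absent in vehicles and outdoors (Eq. 3a)."""
    e_ig = sum(c_ig.get(j, 0.0) * t[j] for j in t)
    e_ag = sum(c_ag[j] * t[j] for j in t)
    return e_ig, e_ag

# Hypothetical concentrations (ug/m3) above the assumed baseline:
C_IG = {"residence": 6.0, "other_indoor": 3.0}
C_AG = {"residence": 5.0, "other_indoor": 5.5, "vehicle": 9.0, "outdoor": 10.0}
e_ig, e_ag = exposure_terms(C_IG, C_AG, T)
```

With these assumptions, residential exposure dominates both terms, reflecting the large fraction of time spent at home.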
Modifying effect estimates for PM2.5 of outdoor origin
Data from the 1992–1994 National Human Activity Pattern Survey (NHAPS) showed that, on average, people in the US spent 68.7% of their time in residences, 18.2% of their time in indoor locations other than residences (e.g., offices, factories, bars, schools, and restaurants), 5.5% of their time in vehicles, and 7.6% of their time outdoors [17]. Therefore, historically observed associations between outdoor PM2.5 concentrations and adverse health outcomes can reasonably be expected to have indirectly accounted for the underlying exposures to PM2.5 of outdoor origin that infiltrates and persists in these various microenvironments [66]. Failing to account for these underlying exposures to PM2.5 of outdoor origin in different microenvironments can lead to exposure misclassification and errors in effect estimates [35, 67,68,69,70,71,72,73,74,75,76,77,78,79,80]. To account for this phenomenon, we developed a modified mortality effect estimate for PM2.5 of outdoor origin (i.e., βPM2.5,AG,modified) based on the average fraction of PM2.5 of outdoor origin that infiltrates and persists in each assumed microenvironment (i.e., the infiltration factor) combined with the average fraction of time spent in each microenvironment, as shown in Eq. 4.
$$\beta _{PM2.5,AG,modified} = \frac{{\beta _{PM2.5}}}{{{\mathrm{\Sigma }}F_jt_j}}$$
(4)
where βPM2.5 is the mortality effect estimate for outdoor PM2.5 from epidemiology studies that used outdoor concentrations as surrogates for average population exposure to outdoor PM2.5, Fj is the average PM2.5 infiltration factor for microenvironment j, and tj is the fraction of time spent in each microenvironment j. ΣFj×tj is estimated using Eq. 5, which represents a weighted average of the product of the infiltration factors and fractional time spent in each of the four microenvironments used herein.
$$\begin{array}{l}{\mathrm{\Sigma }}F_j \times t_j = (F \times t)_{outdoor} + (F \times t)_{residence}\\ + (F \times t)_{vehicle} + (F \times t)_{other\,indoor}\end{array}$$
(5)
We estimate a mean value of ΣFj×tj to be ~0.60 for the US population using a number of data sources as described in the SI. Although there would be variability in this value for each individual in a particular population included in a cohort study, this value is assumed to be broadly applicable as a reasonable estimate of the population-average value.
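Equation 5 is a simple weighted sum. In the sketch below, the time fractions are the NHAPS values cited in this article, while the infiltration factors Fj are illustrative placeholders (the study's actual values are in the SI), chosen only so the sum lands near the ~0.60 population-average estimate; F = 1.0 outdoors by definition.

```python
# Eq. 5 as a weighted sum. F_j values here are illustrative placeholders,
# not the SI-derived study values; t_j are the NHAPS time fractions.
F = {"outdoor": 1.0, "residences": 0.55, "vehicles": 0.5, "other_indoor": 0.6}
t = {"outdoor": 0.076, "residences": 0.687, "vehicles": 0.055, "other_indoor": 0.182}

sum_Ft = sum(F[j] * t[j] for j in F)
print(round(sum_Ft, 2))  # close to the ~0.60 population-average used in Eq. 4
```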
Applying the model framework: scenario analyses
We apply the model framework using MATLAB to estimate the magnitude and bounds of the US mortality burden of long-term average total PM2.5 exposures that result from indoor and outdoor PM sources in all non-smoking microenvironments. We define two primary scenarios that involve different assumptions and data sources for key input parameters, including: (i) a nationwide estimate based primarily on data from field measurements (where possible) and nationwide distributions of model input parameters; and (ii) a nationwide estimate based primarily on regionally varying modeled microenvironmental PM2.5 concentrations and other regionally varying model input parameters (where possible). A third scenario involves an application of the GBD IER model for comparison purposes; methods and results are included in the SI (although we have limited confidence in the approach for a number of reasons as described in the SI). Each model scenario is constructed to yield insight into how the framework can be used to generate mortality estimates and attribute them to microenvironmental exposures, while also highlighting unique data limitations present within each set of scenario assumptions.
For both Scenario 1 and 2, we use a central pooled estimate of RR for the increase in long-term all-cause mortality associated with outdoor PM2.5 concentrations in the US of 7.3% per 10 µg/m3 (95% CI: 3.7–11%) as reported in a recent quantitative meta-analysis of outdoor PM2.5 C-R functions [39]. We convert the pooled RR estimate of 1.073 per 10 µg/m3 to an effect estimate (i.e., βPM2.5) of 0.0070 (95% CI: 0.0036–0.0104), where βPM2.5 = ln(RR)/10 [81]. We fit a Weibull distribution to these reported values, resulting in a mean (±SD) value of βPM2.5 = 0.0070 (± 0.0016) per µg/m3 with distribution shape factors of α = 0.765 and β = 4.95. A Weibull distribution was used because it yields a distribution that is very close to normal in shape, but does not produce any negative values. Moreover, we estimate βPM2.5,AG,modified to be ~0.0117 per µg/m3 using Eq. 4 (i.e., 0.0070 divided by 0.6) with a 95% CI of 0.0060–0.0174 per µg/m3. This modified effect estimate for all-cause mortality associated with outdoor PM2.5 represents a more generalizable effect estimate that accounts for the population-average locations and durations in which people are likely exposed to PM2.5 of outdoor origin.
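The RR-to-effect-estimate conversion and the Eq. 4 modification described above can be verified directly (sketched in Python for illustration):

```python
import math

# Convert the pooled RR of 1.073 per 10 ug/m3 (95% CI: 1.037-1.11) to an
# effect estimate via beta = ln(RR)/10, then apply Eq. 4 using the
# population-average sum(F_j * t_j) ~= 0.60 from the text.
def rr_to_beta(rr, per=10.0):
    return math.log(rr) / per

beta = rr_to_beta(1.073)       # central estimate, per ug/m3
beta_lo = rr_to_beta(1.037)    # lower 95% CI bound
beta_hi = rr_to_beta(1.11)     # upper 95% CI bound
beta_modified = beta / 0.60    # Eq. 4

print(round(beta, 4), round(beta_lo, 4), round(beta_hi, 4), round(beta_modified, 4))
```

This reproduces the reported values of 0.0070 (95% CI: 0.0036-0.0104) and the modified estimate of ~0.0117 per µg/m3.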
Scenario 1: Nationwide estimate based primarily on prior field studies
In Scenario 1, we estimated the mortality burden for the adult population 35 years and older using nationwide distributions of model inputs. We assumed a national annual average outdoor PM2.5 concentration of 9.1 µg/m3 with 10th and 90th percentiles of 6.6 and 11.2 µg/m3, respectively, taken from the EPA's nationwide monitoring network data for the year 2012 [82]. The year 2012 was chosen because it was the year for which we had the most comprehensive national (Scenario 1) and regional (Scenario 2) estimates for indoor and outdoor PM2.5 concentrations. We fit a lognormal distribution through the reported arithmetic mean and percentiles to construct a distribution from which to sample (GM = 8.84 µg/m3 and GSD = 1.246). We assumed a baseline (i.e., threshold) PM2.5 concentration of zero in each microenvironment, which is consistent with other recent applications of the core health impact function used in this scenario [42, 43] and with a number of studies that suggest there is no evidence of a population threshold in the relationship between long-term exposure to ambient PM2.5 and mortality [83,84,85,86]. We assumed that the 2012 nationwide population (Pop) and mortality rate (y0) for persons 35 years and older were 166,516,716 and 1.463 × 10⁻² per person per year, respectively, using data from the CDC WONDER system [87].
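As a consistency check, the fitted lognormal parameters (GM = 8.84 µg/m3, GSD = 1.246) can be compared analytically against the reported 2012 summary statistics. Because the fit is through a mean and two percentiles simultaneously, only approximate agreement is expected:

```python
import math

# Check the fitted lognormal (GM = 8.84, GSD = 1.246) against the reported
# 2012 outdoor PM2.5 statistics (mean 9.1, p10 6.6, p90 11.2 ug/m3).
GM, GSD = 8.84, 1.246
sigma = math.log(GSD)
z90 = 1.2816  # standard-normal 90th percentile

mean = GM * math.exp(sigma**2 / 2)   # arithmetic mean of a lognormal
p10 = GM * math.exp(-z90 * sigma)    # 10th percentile
p90 = GM * math.exp(z90 * sigma)     # 90th percentile

print(round(mean, 2), round(p10, 2), round(p90, 2))
```

The analytic mean and 10th percentile land within ~1% of the reported values, and the 90th percentile within ~5%, consistent with an approximate fit.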
We used Monte Carlo simulations with 10,000 iterations to sample from what we assumed for the purposes of Scenario 1 to be nationally representative distributions of every other model input parameter, including modified PM2.5 mortality effect estimates (described previously), time-activity patterns, and estimates of long-term average PM2.5 concentrations of both indoor and outdoor origin in each microenvironment taken largely from prior field measurements. There are three versions of Scenario 1, each of which involved sampling from different distributions to estimate residential PM2.5 concentrations of both indoor and outdoor origin. We sampled data from (i) the Relationship of Indoor, Outdoor and Personal Air (RIOPA) [13] and (ii) the Multi-Ethnic Study of the Atherosclerosis and Air Pollution (MESA Air) [18, 19] studies independently, as well as (iii) both RIOPA and MESA equally. Briefly, the RIOPA study measured indoor and outdoor PM2.5 concentrations concurrently for 48 h in 212 non-smoking residences in three US cities, while MESA Air measured indoor and outdoor PM2.5 concentrations concurrently over a 2-week period in 208 homes in warm seasons and 264 homes in cold seasons in seven US cities. Crucially, subsequent analyses of both data sets reported distributions of PM2.5 infiltration factors, which can be used to estimate the relative contributions of both indoor and outdoor sources to indoor PM2.5 concentrations in the sample residences. Although a few other studies have also explicitly measured indoor concentrations of PM2.5 in US residences resulting from indoor and outdoor sources, including a study of 294 inner-city homes of children with asthma in seven cities [27] and 68 smoking and non-smoking homes in six cities [88], we chose to rely on the RIOPA and MESA Air studies because they included large sample sizes of non-smoking homes occupied by adults in multiple US cities.
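The 50/50 RIOPA/MESA sampling step can be sketched as follows. This is an illustrative Python translation of one piece of the Monte Carlo procedure: the normal distribution parameters are placeholders, not the published RIOPA or MESA statistics, and negative draws are replaced with zero as described below.

```python
import random

# Sketch of the 50/50 RIOPA/MESA sampling step: each Monte Carlo iteration
# draws the residential indoor concentration from one of two study
# distributions with equal probability. The mean/SD values are illustrative
# placeholders, NOT the published RIOPA or MESA statistics; negative draws
# are replaced with zero, as in the text.
def draw_residential_dC(rng):
    if rng.random() < 0.5:
        mu, sd = 8.0, 6.0   # "RIOPA-like" placeholder mean/SD, ug/m3
    else:
        mu, sd = 5.0, 4.0   # "MESA-like" placeholder mean/SD, ug/m3
    return max(0.0, rng.normalvariate(mu, sd))  # clip negatives to zero

rng = random.Random(42)
samples = [draw_residential_dC(rng) for _ in range(10_000)]
print(len(samples), min(samples))
```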
All relevant model inputs and data sources for Scenario 1 are summarized in full in the SI. Each model iteration represents a population-level estimate of total mortality summed across all microenvironmental exposures; thus, the central tendency of the model output provides the most likely estimate of the magnitude of the total mortality associated with PM2.5 exposure and the output range informs the likely bounds of that estimate. In all microenvironments, if a sampled value of a microenvironmental PM2.5 concentration was a negative value, it was replaced with zero.
Scenario 2: Nationwide estimate based on regional model outputs
In Scenario 2, we similarly applied the model framework to make a nationwide estimate of the total mortality burden attributable to microenvironmental PM2.5 exposures, albeit using regional assumptions for some input parameters for which regional data were available, including population demographics, baseline over-35 adult mortality rates, outdoor PM2.5 concentrations, and, importantly, residential indoor PM2.5 concentrations of both indoor and outdoor origin. We used the same nationwide distributions of time-activity patterns and all non-residential indoor microenvironmental PM2.5 concentrations from Scenario 1 because we are not aware of any robust regional data sets for these parameters. However, given that the Scenario 1 analysis demonstrated the sensitivity of the model to assumptions for residential exposures, and given that other air pollution risk assessments have shown the utility of using geographically varying population demographics and mortality rates [38, 42], we consider Scenario 2 a reasonable, albeit somewhat limited, attempt to construct a national mortality estimate using more granular input data.
Scenario 2 uses regional estimates of residential indoor PM2.5 concentrations of indoor origin and ambient infiltration factors recently made using a nationally representative set of combined residential energy and indoor air quality (REIAQ) models for non-smoking US residences [89]. Briefly, the REIAQ model set combined building energy models with dynamic pollutant mass balance models to estimate the hourly concentrations of a number of pollutants of indoor and outdoor origin, including PM2.5, in a total of 3971 individual home models in 19 cities that are estimated to represent ~80% of the US housing stock as of approximately the early 2000s. The model set assumed cooking was the primary indoor PM2.5 source and assumed the same generation rates and cooking frequency for all homes. The model set also accounted for historical outdoor PM2.5 concentrations and modeled infiltration air exchange rates, window opening behaviors, and forced air heating and cooling system runtimes based on historical outdoor environmental conditions combined with a building physics model. We used these modeled results for the regionally varying annual average residential indoor PM2.5 concentrations of indoor origin (i.e., ΔCPM2.5,IG,residences) in conjunction with regional distributions of ambient PM2.5 infiltration factors combined with regional distributions of outdoor PM2.5 concentrations for the year 2012 from EPA [82] to generate estimates of ΔCPM2.5,AG,residences in each of the 19 modeled cities. We used the infiltration factor approach (rather than using values of ΔCPM2.5,AG,residences directly from REIAQ) because the model set is weighted more heavily toward homes in cities with higher ambient PM2.5 concentrations than rural areas, while the EPA outdoor concentration data are more broadly applicable to the rest of the population.
We grouped the REIAQ model outputs for each of the 3971 home models into nine US census divisions and calculated a population-weighted annual average and SD for ΔCPM2.5,IG,residences and infiltration factors (Finf) across all homes in each division (Table 1). We fit beta and lognormal distributions to summary statistics of infiltration factors and indoor PM2.5 concentration of indoor origin, respectively, for Monte Carlo sampling from each division. For PM2.5 of ambient origin, we used annual average (and 10th and 90th percentiles) outdoor PM2.5 concentration data for nine US regions reported by EPA [82]. Because the nine EPA regions group states differently than the nine US census divisions, we regrouped the EPA data by assuming that every state in an EPA region had the same annual outdoor PM2.5 concentration summary statistics as other states in that region. We estimated the annual average (and 10th and 90th percentile) outdoor PM2.5 concentration in each census division by weighting each assumed state-level summary statistic by the population in each census division. We fit lognormal distributions to the resulting estimates of annual outdoor PM2.5 summary statistics (means and 10th and 90th percentiles) in each division for subsequent Monte Carlo sampling.
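The regrouping step described above (assigning each state its EPA region's outdoor PM2.5 statistics, then population-weighting to the census-division level) can be sketched as below. The states, populations, and concentrations are illustrative placeholders, not the actual EPA or census data.

```python
# Sketch of the EPA-region-to-census-division regrouping: each state inherits
# its EPA region's annual outdoor PM2.5 mean, and the division mean is the
# population-weighted average over its states. All values are illustrative.
epa_region_mean = {"R1": 8.0, "R2": 10.0}            # ug/m3, illustrative
state_epa_region = {"A": "R1", "B": "R1", "C": "R2"} # hypothetical states
state_pop = {"A": 2_000_000, "B": 1_000_000, "C": 3_000_000}
division_states = {"Division X": ["A", "B", "C"]}

def division_mean(div):
    states = division_states[div]
    total_pop = sum(state_pop[s] for s in states)
    return sum(epa_region_mean[state_epa_region[s]] * state_pop[s]
               for s in states) / total_pop

print(division_mean("Division X"))
```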
Table 1 Summary of estimates for key input parameters made for each US census division for the regional analysis in Scenario 2
We then ran the 10,000-iteration Monte Carlo analysis nine times (one for each census division) with over-35 adult mortality rates and populations [87] (also shown in Table 1) to yield estimates of total mortality and distributions of the different microenvironmental exposure contributions in each division. We summed the median total mortality estimates from each census division to generate an estimate of the national mortality burden associated with total PM2.5 exposure. We estimated the mortality burden attributable to each microenvironment and source type using the average fractional exposure contributions multiplied by the best estimate (i.e., median) total mortality, similar to Scenario 1.
Scenario 1: Nationwide estimates based primarily on prior field studies
The resulting distributions of estimates of the annual US mortality burden of total PM2.5 exposure in 2012 attributable to both indoor and outdoor sources in all microenvironments combined using assumptions in Scenario 1 are shown in Fig. 1. Results for all three RIOPA and MESA sampling approaches were approximately lognormally distributed with a Shapiro–Wilk test statistic (W) > 0.98 and p < 0.00001 on the log-transformed values for each case. We consider the median values as our most likely estimate of the total mortality burden of all PM2.5 exposures for each case, with an interquartile range (IQR, or 25th to 75th percentiles) serving as a measure of the most reasonable bounds of the central estimate. The median (IQR) estimates of the total mortality associated with all PM2.5 exposures were ~298,200 (198,600–479,500), ~229,400 (171,400–306,700), and ~255,800 (180,600–380,700) deaths for the 100% RIOPA, 100% MESA, and 50%/50% RIOPA/MESA cases, respectively. These estimates would mean that aggregate PM2.5 exposures accounted for between 9 and 12% of the total number of deaths among adults over the age of 35 in 2012.
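The "9 to 12% of adult deaths" figure follows directly from the stated 2012 population and baseline mortality rate:

```python
# Check the "9-12% of adult deaths" claim against the stated 2012 population
# (166,516,716 adults 35+) and baseline mortality rate (1.463e-2 per person-yr).
pop_35plus = 166_516_716
y0 = 1.463e-2
total_deaths = pop_35plus * y0   # ~2.44 million deaths among adults 35+

low_estimate = 229_400    # MESA-only median mortality estimate
high_estimate = 298_200   # RIOPA-only median mortality estimate

low_frac = low_estimate / total_deaths
high_frac = high_estimate / total_deaths
print(round(low_frac * 100), round(high_frac * 100))
```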
Fig. 1 Frequency distributions of the total annual US PM2.5 mortality burden estimated by Monte Carlo simulations of microenvironmental exposures to PM2.5 of both indoor and outdoor origin using three cases in Scenario 1, including sampling residential indoor concentrations from: a RIOPA-only, b MESA-only, and c from RIOPA and MESA equally (i.e., 50/50 RIOPA/MESA). The approximate curve fit is a lognormal distribution and summary statistics (median and interquartile range) are provided in units of deaths per year
Distributions of the estimated fractional exposure contributions from indoor and outdoor sources in each microenvironment modeled in Scenario 1 are shown in Fig. 2. In each of the three RIOPA/MESA cases, residential PM2.5 exposure to indoor and outdoor sources combined was the dominant exposure, accounting for 70% of the total PM2.5 exposure across all three scenarios, on average. Residential exposure accounted for an average of ~67% of the total exposure to PM2.5 of outdoor origin across the three scenarios, followed by an average of ~17% of outdoor origin exposure attributed to other indoor environments, with direct outdoor exposure accounting for only ~12% of all outdoor-origin exposure, on average.
Fig. 2 Distributions of the estimated contributions of microenvironmental exposures to PM2.5 of indoor and outdoor origin to total PM2.5 exposures across the US population using the three Scenario 1 cases: sampling residential indoor concentrations from a RIOPA-only, b MESA only, and c RIOPA and MESA equally (i.e., 50/50 RIOPA/MESA). Boxes represent 25th and 75th percentile values (i.e., interquartile range, or IQR); horizontal line represents median values; whiskers represent upper and lower adjacent values (i.e., 50% beyond the IQR)
In both the MESA-only and the combined RIOPA/MESA 50/50 scenarios, residential exposure to PM2.5 of outdoor origin dominated total exposure, accounting for an average of 48 and 42% of total exposure in the MESA-only and 50/50 combined scenarios, respectively. Residential exposure to PM2.5 of indoor origin was the second largest contributor to total exposure in these two scenarios, ranging from an average of 19 to 28% of total exposure in the MESA-only and 50/50 combined scenarios, respectively. Conversely, the largest contributor to total exposure in the RIOPA-only scenario was residential exposure to PM2.5 of indoor origin (average of 37%) followed by residential exposure to PM2.5 of outdoor origin (average of 36%). Given the wide ranges of exposure contributions generated by sampling from RIOPA and MESA separately, and given the large differences in the two study designs and findings, we expect the combined 50/50 RIOPA/MESA sampling approach to yield the most plausible nationwide exposure estimates of the three approaches in Scenario 1. Thus, we use only the combined 50/50 RIOPA/MESA study results from Figs. 1 and 2 to estimate the likely mortality burden associated with microenvironmental exposure to PM2.5 of indoor and outdoor origin in Scenario 1 (Table 2).
Table 2 Mean, standard deviation (SD), and interquartile range (IQR: 25th to 75th percentiles) of the estimated contributions of indoor and outdoor sources in each microenvironment to total PM2.5 exposures and the estimated associated US mortality burden in Scenario 1 (50/50 RIOPA/MESA)
We estimate the mortality burden associated with PM2.5 exposure in each microenvironment by multiplying the mean fractional exposure contribution (from Fig. 2) by the median total mortality burden of ~255,800 deaths per year for the combined 50/50 RIOPA/MESA scenario (from Fig. 1). Using this approach, we estimate that exposure to PM2.5 of outdoor origin across all microenvironments accounted for ~160,500 deaths in 2012 (IQR of ~63,300 to ~219,600 deaths), while exposure to PM2.5 of indoor origin across all microenvironments accounted for ~95,300 deaths (IQR of ~13,700 to ~155,400). Our estimate of the mortality burden attributable to outdoor sources falls between the ~120,000 and ~200,000 deaths in 2010 estimated by Fann et al. (2017) [42] using RR estimates and response functions from Krewski et al. (2009) [5] and Nasari et al. (2016) [45], respectively. However, our estimate is almost twice the ~88,400 deaths in 2015 estimated by Cohen et al. (2017) [55] largely because of the threshold concentration used (i.e., zero compared to a uniform distribution between 2.4 and 5.8 µg/m3) and also because of the use of a different model form and associated effect estimates that are not modified to account for microenvironmental exposure to outdoor-origin PM2.5. Both issues are explored in more detail in Scenario 3 in the SI.
In the combined 50/50 RIOPA/MESA scenario, we estimate that the largest contributor to PM2.5-associated mortality is residential indoor exposure to PM2.5 of outdoor origin, accounting for an estimated ~107,700 deaths annually (IQR of ~57,800 to ~150,600). The next largest contributor is residential indoor exposure to PM2.5 of indoor origin, accounting for an estimated ~72,000 deaths annually (IQR of ~13,700 to ~122,600). Indoor exposure to PM2.5 of indoor and outdoor origin in other indoor locations is estimated to account for ~23,300 (IQR of ~0 to ~32,800) and ~28,000 (IQR of ~100 to ~43,300) deaths annually, respectively. Finally, outdoor exposure to PM2.5 of outdoor origin is estimated to account for only ~18,800 (IQR of ~4,500 to ~19,100) deaths annually. Overall, these results demonstrate the importance of indoor environments, and particularly residential indoor environments, in governing human exposure to PM2.5 of both indoor and outdoor origin, and provide novel estimates of the potential magnitude of the nationwide mortality burden associated with these exposures.
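The attribution arithmetic above (mean fractional exposure contribution times the median total burden) can be checked against the two residential shares quoted in the text:

```python
# Attribution step: mean fractional exposure contribution (from Fig. 2)
# times the median total burden (~255,800 deaths/yr, 50/50 RIOPA/MESA).
# The fractions below are the ~42% and ~28% residential shares quoted in
# the text, so results only approximately match the reported death counts.
median_total = 255_800

frac_res_outdoor = 0.42  # residential exposure to outdoor-origin PM2.5
frac_res_indoor = 0.28   # residential exposure to indoor-origin PM2.5

deaths_res_outdoor = frac_res_outdoor * median_total
deaths_res_indoor = frac_res_indoor * median_total
print(round(deaths_res_outdoor), round(deaths_res_indoor))
```

Both results land within ~1% of the reported ~107,700 and ~72,000 deaths, the residual difference reflecting rounding of the quoted percentages.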
Table 3 shows estimates of regional and total mortality associated with microenvironmental PM2.5 exposures resulting from the regional model application (Scenario 2). The median (IQR) estimate of the total mortality associated with all PM2.5 exposures across all microenvironments and sources was ~281,800 (159,700–359,300), which places Scenario 2 approximately between the RIOPA-only and 50/50 RIOPA/MESA cases from Scenario 1. Exposure to PM2.5 of outdoor and indoor origin in all microenvironments was estimated to account for ~139,500 deaths (IQR of ~69,600 to ~177,900) and ~142,300 deaths (IQR of ~90,100 to ~181,400) in 2012, respectively. The relative contributions of indoor and outdoor PM2.5 sources to total mortality are approximately equal, largely because of the use of relatively high indoor concentrations (similar to the RIOPA-only approach in Scenario 1) and relatively low residential infiltration factors that were estimated in the REIAQ model set. Accordingly, residential indoor PM2.5 of indoor origin is estimated to be the single dominant contributor to the total mortality burden in Scenario 2, followed by residential indoor PM2.5 of outdoor origin.
Table 3 Estimates of regional and total mortality associated with microenvironmental exposures to PM2.5 of indoor and outdoor origin in 2012 resulting from the regional Monte Carlo procedure (Scenario 2)
Total mortality in Scenario 2 is driven largely by PM2.5 exposures in the most populated census divisions: South Atlantic, East North Central, Middle Atlantic, and Pacific. The East South Central census division had the highest estimated mortality associated with PM2.5 per capita because of relatively high residential indoor concentrations resulting from indoor sources combined with the highest baseline adult mortality rate in 2012. The lowest per capita mortality estimate was in the Mountain census division, with moderate residential indoor PM2.5 concentrations and a moderate baseline mortality rate. Regional differences in ΔCPM2.5,IG,residences were driven by variations in air exchange rates [89] and system runtimes (which primarily affect particle filtration [90]).
Best estimates of the total mortality burden associated with PM2.5 exposure in the US made using the assumptions in Scenarios 1 and 2, as well as the contribution of each microenvironmental and source-specific exposure, are shown in Fig. 3 for direct comparison. Although the magnitude of total mortality varies in each scenario, best estimates consistently range from ~230,000 to ~300,000 deaths in 2012. Residential exposures to PM2.5 from indoor sources drive the vast majority of variability in each case, suggesting that a better understanding of the nationwide contribution of indoor sources to total exposure is needed, as is a better understanding of the toxicity of indoor sources.
Fig. 3 Best estimates of the number of annual deaths in the US associated with exposure to PM2.5 of indoor and outdoor origin in each microenvironment in Scenarios 1 and 2
One obvious assumption in this work is that the observed relationships between outdoor PM2.5 concentrations and mortality in the epidemiology literature are indeed causal and that the underlying exposure-response functions and effect estimates accurately reflect a causal and quantifiable relationship [91,92,93,94]. Further, the framework assumes that the exposure-response function in Eq. 1 (i) has no threshold PM2.5 concentration below which additional mortality does not occur [83,84,85,86] and (ii) appropriately describes the shape of the observed mortality responses from prior epidemiology studies [95]. Additionally, we do not make any modifications to the exposure-response function and effect estimates based on the magnitude of PM2.5 exposure concentrations or varying chemical constituents of PM2.5, although there is some evidence that both of these adjustments may be warranted [96,97,98,99,100]. Moreover, the framework assumes that there is no double counting of the health effects of indoor PM2.5 sources. We consider this a reasonable assumption because most studies have reported relatively low correlations between personal and ambient PM2.5 concentrations (i.e., R2 < 0.3) [13, 69], but the potential remains that ambient PM2.5 mortality effect estimates from epidemiology cohort studies include an inherent but unquantified indoor contribution.
Another obvious assumption and potential limitation of this work is that the modified exposure-response endpoint effect estimates for mortality associated with PM2.5 from both indoor and outdoor sources are treated as the same, and that there are no changes in PM2.5 toxicity that occur due to size-resolved aerosol dynamics that govern the particle infiltration and persistence process. Although some studies have suggested that particles of outdoor origin may be more harmful than indoor-generated particles [59, 60], other studies have shown that indoor-generated fine particulate matter is at least as toxic as outdoor particulate matter [61], if not more so [62]. However, there is a tremendous lack of data to support or reject either assumption at this time. Given the lack of data on mortality endpoints from various indoor and outdoor PM2.5 sources, we consider this a reasonable assumption for this exploratory analysis. This same assumption also has precedent in a number of other recent studies in the literature that have evaluated mortality endpoints associated with indoor and outdoor PM2.5 sources [46,47,48]. Additionally, there is mounting evidence from air filter intervention studies in homes that reducing indoor PM2.5 concentrations (comprising a mixture of both indoor and outdoor sources) can lead to improvements in some biomarkers and other clinical measures that are associated with both short-term and long-term cardiovascular health endpoints [101,102,103,104,105,106,107].
There are also several assumptions implicit in our approach to modifying health endpoint effect estimates (β) to account for the underlying exposures to PM2.5 of outdoor origin that likely occurred in the original epidemiology populations from which effect estimates are derived. First, we assumed that the distributions of activity patterns and residential building characteristics (i.e., infiltration factors) that we used match both the general population and the epidemiology cohort populations, although this may not be true. For example, elderly populations who are more susceptible to adverse effects associated with PM2.5 exposure tend to spend more time indoors than the general population. Second, we did not consider some potential non-linear effects of various parameters including potential covariance of infiltration factors and ambient PM2.5 as well as occupancy and indoor particle generation. Third, we assumed that the human activity patterns reported in NHAPS [17] are still valid in 2012, even though data were collected in 1992–1994.
Despite the large uncertainties associated with this work, the exposure attribution and mortality burden estimates clearly demonstrate the importance of considering indoor microenvironments in PM2.5 exposure assessments and epidemiology studies. They also illustrate the potential magnitude and reasonable bounds of the mortality burden associated with microenvironmental exposures to PM2.5 of both indoor and outdoor origin. Results also demonstrate that efforts to reduce the US PM2.5-associated mortality burden should at least consider indoor pollutant control in addition to controlling outdoor sources. This model framework can also be used for high-level policy analysis of the costs and benefits of reducing exposures to PM2.5 of indoor and outdoor origin through various interventions (e.g., source control, air purifiers, changing infiltration/ventilation across the building stock, etc.).
This work intentionally focuses solely on non-smoking homes; further model applications could include incorporating data on smoking rates and contributions to indoor PM2.5 concentrations. This work also highlights the need for several areas of research to improve these estimates and reduce uncertainty. For example, a better understanding of how outdoor PM2.5 infiltration factors vary geographically and by different building types is needed to more accurately characterize outdoor PM2.5 exposures for epidemiology studies. Additionally, a better understanding of the toxicity of both indoor and outdoor origin PM2.5 is needed, including characterizing the toxicity of a wide variety of typical indoor sources and also characterizing how the size-resolved dynamics of the outdoor PM2.5 infiltration process may affect the toxicity of PM2.5 of outdoor origin in indoor environments.
Brook RD, Rajagopalan S, Pope CA, Brook JR, Bhatnagar A, Diez-Roux AV, et al. Particulate matter air pollution and cardiovascular disease. Circulation. 2010;121:2331–78.
Di Q, Wang Y, Zanobetti A, Wang Y, Koutrakis P, Choirat C, et al. Air pollution and mortality in the medicare population. N Engl J Med. 2017;376:2513–22.
This work was supported by the US Environmental Protection Agency, Office of Radiation and Indoor Air, Indoor Environments Division. The authors were also supported in part by an ASHRAE New Investigator Award and in part by the US Environmental Protection Agency under Assistance Agreement No. #83575001 awarded to Illinois Institute of Technology. The views expressed in this document are solely those of the authors and do not necessarily reflect those of the Agency. EPA does not endorse any products or commercial services mentioned in this publication. The authors would like to acknowledge Torkan Fazli for providing outputs from her REIAQ model data set.
Azimi, P., Stephens, B. A framework for estimating the US mortality burden of fine particulate matter exposure attributable to indoor and outdoor microenvironments. J Expo Sci Environ Epidemiol 30, 271–284 (2020). https://doi.org/10.1038/s41370-018-0103-4
Revised: 25 September 2018
Issue Date: 01 March 2020
The Universe of Discourse
Mark Dominus (陶敏修)
[email protected]
Say $dt is a Perl DateTime object.
You are allowed to say
$dt->add( days => 2 )
$dt->subtract( days => 2 )
Today Jeff Boes pointed out that I had written a program that used
$dt->add({ days => 2 })
which as far as I can tell is not documented to work. But it did work. (I wrote it in 2016 and would surely have noticed by now if it hadn't.) Jeff told me he noticed when he copied my code and got a warning. When I tried it, no warning.
It turns out that
$dt->add({ days => 2 })
$dt->subtract({ days => 2 })
both work, except that:
The subtract call produces a warning (add doesn't! and Jeff had changed my add to subtract)
If you included an end_of_month => $mode parameter in the arguments to subtract, it would get lost.
Also, the working-ness of what I wrote is a lucky fluke. It is undocumented (I think) and works only because of a quirk of the implementation. ->add passes its arguments to DateTime::Duration->new, which passes them to Params::Validate::validate. The latter is documented to accept either form. But its use by DateTime::Duration is an undocumented implementation detail.
->subtract works the same way, except that it does a little bit of preprocessing on the arguments before calling DateTime::Duration->new. That's where the warning comes from, and why end_of_month won't work with the hashref form.
(All this is as of version 1.27. The current version is 1.51. Matthew Horsfall points out that 1.51 does not raise a warning, because of a different change to the same interface.)
This computer stuff is amazingly complicated. I don't know how anyone gets anything done.
[Other articles in category /prog/bug] permanent link
Alphabetical order in Korean has an interesting twist I haven't seen in any other language.
(Perhaps I should mention up front that Korean does not denote words with individual symbols the way Chinese does. It has a 24-letter alphabet, invented in the 15th century.)
Consider the Korean word "문어", which means "octopus". This is made up of five letters ㅁㅜㄴㅇㅓ. The ㅁㅜㄴ are respectively equivalent to English 'm', 'oo' (as in 'moon'), and 'n'. The ㅇ is silent, just like 'k' in "knit". The ㅓ is a vowel we don't have in English, partway between "saw" and "bud". Confusingly, it is usually rendered in Latin script as 'eo'. (It is the first vowel in "Seoul", for example.) So "문어" is transliterated to Latin script as "muneo", or "munǒ", and approximately pronounced "moon-aw".
But as you see, it's not written as "ㅁㅜㄴㅇㅓ" but as "문어". The letters are grouped into syllables of two or three letters each. (Or, more rarely, four or even five.)
Now consider the word "무해" ("harmless") This word is made of the four letters ㅁㅜㅎㅐ. The first two, as before, are 'm', 'oo'. The ㅎ is 'h' and the 'ㅐ' is a vowel that is something like the vowel in "air", usually rendered in Latin script as 'ae'. So it is written "muhae" and pronounced something like "moo-heh".
ㅎ is the last letter of the alphabet. Because ㅎ follows ㄴ, you might think that 무해 would follow 문어. But it does not. In Korean, alphabetization is also done at the syllable level. The syllable 무 comes before 문, because it is a proper prefix, so 무해 comes before 문어. If the syllable break in 문어 were different, causing it to be spelled 무너, it would indeed come before 무해. But it isn't, so it doesn't. ("무너" does not seem to be an actual word, but it appears as a constituent in words like 무너지다 ("collapse") and 무너뜨리다 ("demolish") which do come before 무해 in the dictionary.)
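This syllable-level order is baked into Unicode: the precomposed Hangul syllables (U+AC00 onward) are arranged in exactly this dictionary order for modern Korean, so plain code-point comparison reproduces it. A quick Python check (my illustration, not from the original text):

```python
# Precomposed Hangul syllables are arranged in dictionary order, so
# ordinary string comparison sorts Korean words syllable-by-syllable.
words = ["문어", "무해"]          # "octopus", "harmless"
print(sorted(words))             # ['무해', '문어']

# 문 is the code point for 무 plus a final-consonant offset, so 무 < 문:
print(ord("무") < ord("문"))     # True
```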
As far as I know, there is nothing in Korean analogous to the English alphabet song.
Or to alphabet soup! Koreans love soup! And they love the alphabet, so why no hangeul-tang? There is a hundred dollar bill lying on the sidewalk here, waiting to be picked up.
[ Previously, but just barely related: Medieval Chinese typesetting technique. ]
[Other articles in category /lang] permanent link
Last year a new Math Stack Exchange user asked What's the difference between !!\frac00!! and !!\frac10!!?.
I wrote an answer I thought was pretty good, but the question was downvoted and deleted as "not about mathematics". This is bullshit, but what can I do?
I can repatriate my answer here, anyway.
This long answer has two parts. The first one is about the arithmetic, and is fairly simple, and is not very different from the other answers here: neither !!\frac10!! nor !!\frac00!! has any clear meaning. But your intuition is a good one: if one looks at the situation more carefully, !!\frac10!! and !!\frac00!! behave rather differently, and there is more to the story than can be understood just from the arithmetic part. The second half of my answer tries to go into these developments.
The notation !!\frac ab!! has a specific meaning:
The number !!x!! for which $$x\cdot b=a.$$
Usually this is simple enough. There is exactly one number !!x!! for which !!x\cdot 7=21!!, namely !!3!!, so !!\frac{21}7=3!!. There is exactly one number !!x!! for which !!x\cdot 4=7!!, namely !!\frac74!!, so !!\frac74\cdot4=7!!.
But when !!b=0!! we can't keep the promise that is implied by the word "the" in "The number !!x!! for which...". Let's see what goes wrong. When !!b=0!! the definition says:
The number !!x!! for which $$x\cdot 0=a.$$
When !!a\ne 0!! this goes severely wrong. The left-hand side is zero and the right-hand side is not, so there is no number !!x!! that satisfies the condition. Suppose !!x!! is the ugliest gorilla on the dairy farm. But the farm has no gorillas, only cows. Any further questions you have about !!x!! are pointless: is !!x!! a male or female gorilla? Is its fur black or dark gray? Does !!x!! prefer bananas or melons? There is no such !!x!!, so the questions are unanswerable.
When !!a!! and !!b!! are both zero, something different goes wrong:
The number !!x!! for which $$x\cdot 0=0.$$
It still doesn't work to speak of "The number !!x!! for which..." because any !!x!! will work. Now it's like saying that !!x!! is 'the' cow from the dairy farm. But there are many cows, so questions about !!x!! are still pointless, although in a different way: Does !!x!! have spots? I dunno man, what is !!x!!?
Asking about this !!x!!, as an individual number, never makes sense, for one reason or the other, either because there is no such !!x!! at all (!!\frac a0!! when !!a≠0!!) or because the description is not specific enough to tell you anything (!!\frac 00!!).
If you are trying to understand this as a matter of simple arithmetic, using analogies about putting cookies into boxes, this is the best you can do. That is a blunt instrument, and for a finer understanding you need to use more delicate tools. In some contexts, the two situations (!!\frac00!! and !!\frac10!!) are distinguishable, but you need to be more careful.
Suppose !!f!! and !!g!! are some functions of !!x!!, each with definite values for all numbers !!x!!, and in particular !!g(0) = 0!!. We can consider the quantity $$q(x) = \frac{f(x)}{g(x)}$$ and ask what happens to !!q(x)!! when !!x!! gets very close to !!0!!. The quantity !!q(0)!! itself is undefined, because at !!x=0!! the denominator is !!g(0)=0!!. But we can still ask what happens to !!q!! when !!x!! gets close to zero, but before it gets all the way there. It's possible that as !!x!! gets closer and closer to zero, !!q(x)!! might get closer and closer to some particular number, say !!Q!!; we can ask if there is such a number !!Q!!, and if so what it is.
It turns out we can distinguish quite different situations depending on whether the numerator !!f(0)!! is zero or nonzero. When !!f(0)\ne 0!!, we can state decisively that there is no such !!Q!!. For if there were, it would have to satisfy !!Q\cdot 0=f(0)!! which is impossible; !!Q!! would have to be a gorilla on the dairy farm. There are a number of different ways that !!q(x)!! can behave in such cases, when its denominator approaches zero and its numerator does not, but all of the possible behaviors are bad: !!q(x)!! can increase or decrease without bound as !!x!! gets close to zero; or it can do both depending on whether we approach zero from the left or the right; or it can oscillate more and more wildly, but in no case does it do anything like gently and politely approaching a single number !!Q!!.
But if !!f(0) = 0!!, the answer is more complicated, because !!Q!! (if it exists at all) would only need to satisfy !!Q\cdot 0=0!!, which is easy. So there might actually be a !!Q!! that works; it depends on further details of !!f!! and !!g!!, and sometimes there is and sometimes there isn't. For example, when !!f(x) = x^2+2x!! and !!g(x) = x!! then !!q(x) = \frac{x^2+2x}{x}!!. This is still undefined at !!x=0!! but at any other value of !!x!! it is equal to !!x+2!!, and as !!x!! approaches zero, !!q(x)!! slides smoothly in toward !!2!! along the straight line !!x+2!!. When !!x!! is close to (but not equal to) zero, !!q(x)!! is close to (but not equal to) !!2!!; for example when !!x=0.001!! then !!q(x) = \frac{0.002001}{0.001} = 2.001!!, and as !!x!! gets closer to zero !!q(x)!! gets even closer to !!2!!. So the number !!Q!! we were asking about does exist, and is in fact equal to !!2!!. On the other hand if !!f(x) = x!! and !!g(x) = x^2!! then there is still no such !!Q!!.
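The contrast between the two cases is easy to see numerically. Here is a quick Python check (mine, just for illustration) of !!q(x) = \frac{x^2+2x}{x}!!, which settles politely toward !!2!!, against !!\frac{x}{x^2}!!, whose numerator also vanishes but which has no limit at all:

```python
def q(x):
    # (x**2 + 2x)/x: undefined at x = 0, but approaches Q = 2 nearby
    return (x**2 + 2*x) / x

def r(x):
    # x/x**2: numerator and denominator both vanish at 0, yet no Q exists;
    # the quotient grows without bound as x shrinks
    return x / x**2

for x in [0.1, 0.01, 0.001]:
    print(f"q({x}) = {q(x):.4f}   r({x}) = {r(x):.1f}")
```

Running it shows !!q!! marching through 2.1, 2.01, 2.001 while !!r!! runs off through 10, 100, 1000.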
The details of how this all works, when there is a !!Q!! and when there isn't, and how to find it, are very interesting, and are the basic idea that underpins all of calculus. The calculus part was invented first, but it bothered everyone because although it seemed to work, it depended on an incoherent idea about how division by zero worked. Trying to frame it as a simple matter of putting cookies into boxes was no longer good enough. Getting it properly straightened out was a long process that took around 150 years, but we did eventually get there and now I think we understand the difference between !!\frac10!! and !!\frac00!! pretty well. But to really understand the difference you probably need to use the calculus approach, which may be more delicate than what you are used to. But if you are interested in this question, and you want the full answer, that is definitely the way to go.
[Other articles in category /math] permanent link
A while back I wrote an article about confusing and misleading technical jargon, drawing special attention to botanists' indefensible misuse of the word "berry" and then to the word "henge", which archaeologists use to describe a class of Stonehenge-like structures of which Stonehenge itself is not a member.
I included a discussion of mathematical jargon and generally gave it a good grade, saying:
Nobody hearing the term "cobordism" … will think for an instant that they have any idea what it means … they will be perfectly correct.
But conversely:
The non-mathematician's idea of "line", "ball", and "cube" is not in any way inconsistent with what the mathematician has in mind …
Today I find myself wondering if I gave mathematics too much credit. Some mathematical jargon is pretty bad. Often brought up as an example are the topological notions of "open" and "closed" sets. It sounds as if they should be exclusive and exhaustive — surely a set that is open is not closed, and vice versa? — but no, there are sets that are neither open nor closed and other sets that are both. Really the problem here is entirely with "open". The use of "closed" is completely in line with other mathematical uses of "closed" and "closure". A "closed" object is one that is a fixed point of a closure operator. Topological closure is an example of a closure operator, and topologically closed sets are its fixed points.
(Last month someone asked on Stack Exchange if there was a connection between topological closure and binary operation closure and I was astounded to see a consensus in the comments that there was no relation between them. But given a binary operation !!\oplus!!, we can define an associated closure operator !!\text{cl}_\oplus!! as follows: !!\text{cl}_\oplus(S)!! is the smallest set !!\bar S!! that contains !!S!! and for which !!x,y\in\bar S!! implies !!x\oplus y\in \bar S!!. Then the binary operation !!\oplus!! is said to be "closed on the set !!S!!" precisely if !!S!! is closed with respect to !!\text{cl}_\oplus!!; that is if !!\text{cl}_\oplus(S) = S!!. But I digress.)
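For the concretely minded, that closure operator !!\text{cl}_\oplus!! can be sketched in a few lines of Python. This toy version only terminates when the closure is finite, as it is in the example; the operation and starting sets are arbitrary illustrations:

```python
# cl_⊕(S): repeatedly apply the binary operation ⊕ to pairs of elements
# until nothing new appears.  S is "closed" when it is a fixed point.

def closure(s, op):
    s = set(s)
    while True:
        new = {op(x, y) for x in s for y in s} - s
        if not new:
            return s
        s |= new

add_mod_5 = lambda x, y: (x + y) % 5

print(closure({1}, add_mod_5))   # {0, 1, 2, 3, 4}: {1} is not closed
print(closure({0}, add_mod_5))   # {0}: a fixed point, hence closed
```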
Another example of poor nomenclature is "even" and "odd" functions. This is another case where it sounds like the terms ought to form a partition, as they do in the integers, but that is wrong; most functions are neither even nor odd, and there is one function that is both. I think what happened here is that first an "even" polynomial was defined to be a polynomial whose terms all have even exponents (such as !!x^4 - 10x^2 + 1!!) and similarly an "odd" polynomial. This already wasn't great, because most polynomials are neither even nor odd. But it was not too terrible. And at least the meaning is simple and easy to remember. (Also you might like the product of an even and an odd polynomial to be even, as it is for even and odd integers, but it isn't, it's always odd. As far as even-and-oddness is concerned the multiplication of the polynomials is analogous to addition of integers, and to get anything like multiplication you have to compose the polynomials instead.)
And once that step had been taken it was natural to extend the idea from polynomials to functions generally: odd polynomials have the property that !!p(-x) = -p(x)!!, so let's say that an odd function is one with that property. If an odd function is analytic, you can expand it as a Taylor series and the series will have only odd-degree terms even though it isn't a polynomial.
There were two parts to that journey, and each one made some sense by itself, but by the time we got to the end it wasn't so easy to see where we started from. Unfortunate.
I tried a web search for bad mathematics terminology and the top hit was this old blog article by my old friend Walt. (Not you, Walt, another Walt.) Walt suggests that
the worst terminology in all of mathematics may be that of !!G_\delta!! and !!F_\sigma!! sets…
I can certainly get behind that nomination. I have always hated those terms. Not only does it partake of the dubious open-closed terminology I complained of earlier (you'll see why in a moment), but all four letters are abbreviations for words in other languages, and not the same language. A !!G_\delta!! set is one that is a countable intersection of open sets. The !!G!! is short for Gebiet, which is German for an open neighborhood, and the !!\delta!! is for Durchschnitt, which is German for set intersection. And on the other side of the Ruhr Valley, an !!F_\sigma!! set, which is a countable union of closed sets, is from French fermé ("closed") and !!\sigma!! for somme (set union). And the terms themselves are completely opaque if you don't keep track of the ingredients of this unwholesome German-French-Greek stew.
This put me in mind of a similarly obscure pair that I always mix up, the type I and type II errors. One of them is when you fail to ignore something insignificant, and the other is when you fail to notice something significant, but I don't remember which is which and I doubt I ever will.
But the one I was thinking about today that kicked all this off is, I think, worse than any of these. It's really shameful, worthy to rank with cucumbers being berries and with Stonehenge not being a henge.
These are all examples of elliptic curves:
These are not:
That's right, ellipses are not elliptic curves, and elliptic curves are not elliptical. I don't know who was responsible for this idiocy, but if I ever meet them I'm going to kick them in the ass.
[ Addendum 20200510: Several people have earnestly explained to me how this terminological disaster came about. Please be assured that I am well aware of the history here. The situation is similar to the one that gave us "even" and "odd" functions: a long chain of steps each of which made some sense individually, but whose concatenation ended in a completely different place. This MathOverflow post has a good summary. ]
[ Addendum 20200510: Mark Badros has solved the "Type I / II" problem for me. They point out that in the story of the Boy Who Cried Wolf, there are two episodes. In the first episode, the boy and the villagers commit a Type I error by reacting to the presence of a wolf when there is none. In the second episode, they commit a Type II error by failing to react to the actual wolf. Thank you! ] | CommonCrawl |
Lieb's Theorem and Maximum Entropy Condensates
Joseph Tindall1, Frank Schlawin2,3, Michael Sentef2, and Dieter Jaksch1,3,4
1Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
2Max Planck Institute for the Structure and Dynamics of Matter, 22761 Hamburg, Germany
3The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, Hamburg, Germany
4Institut für Laserphysik, Universität Hamburg, 22761 Hamburg, Germany
Coherent driving has established itself as a powerful tool for guiding a many-body quantum system into a desirable, coherent non-equilibrium state. A thermodynamically large system will, however, almost always saturate to a featureless infinite temperature state under continuous driving and so the optical manipulation of many-body systems is considered feasible only if a transient, prethermal regime exists, where heating is suppressed. Here we show that, counterintuitively, in a broad class of lattices Floquet heating can actually be an advantageous effect. Specifically, we prove that the maximum entropy steady states which form upon driving the ground state of the Hubbard model on unbalanced bi-partite lattices possess uniform off-diagonal long-range order which remains finite even in the thermodynamic limit. This creation of a `hot' condensate can occur on $\textit{any}$ driven unbalanced lattice and provides an understanding of how heating can, at the macroscopic level, expose and alter the order in a quantum system. We discuss implications for recent experiments observing emergent superconductivity in photoexcited materials.
Featured image: Off-diagonal order in the steady state of the driven Hubbard model versus the `imbalance' of the underlying bi-partite lattice. Data is for the thermodynamic limit. The steady state is formed by driving the ground-state to infinite temperature whilst preserving SU(2) symmetry. The resulting correlations are always uniform with distance (see inset). Notable lattices are listed on the right.
In general, heat is deleterious towards quantum effects. Under certain symmetry constraints, however, this is not true and heating can be used to re-arrange and expose order in a quantum system.
In this article we show how, in a paradigmatic electronic system, the amount of order which results from this process can be directly related to certain geometrical properties of the underlying lattice. This allows us to identify a range of lattice structures where heating can be used to manipulate and manifest quantum order even at the macroscopic level.
We discuss possible experimental realisations of our work and its potential as a novel method for engineering and controlling superconductivity.
@article{Tindall2021liebstheorem,
  doi = {10.22331/q-2021-12-23-610},
  url = {https://doi.org/10.22331/q-2021-12-23-610},
  title = {Lieb's {T}heorem and {M}aximum {E}ntropy {C}ondensates},
  author = {Tindall, Joseph and Schlawin, Frank and Sentef, Michael and Jaksch, Dieter},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {5},
  pages = {610},
  month = dec,
  year = {2021}
}
Potential sex-dependent effects of weather on apparent survival of a high-elevation specialist
Eliseo Strinella1, Davide Scridel2,3, Mattia Brambilla2,4, Christian Schano5,6 & Fränzi Korner-Nievergelt5
Subjects: Climate-change ecology, Population dynamics
Mountain ecosystems are inhabited by highly specialised and endemic species which are particularly susceptible to climatic changes. However, the mechanisms by which climate change affects species population dynamics are still largely unknown, particularly for mountain birds. We investigated how weather variables correlate with survival or movement of the white-winged snowfinch Montifringilla nivalis, a specialist of high-elevation habitat. We analysed a 15-year (2003–2017) mark-recapture data set of 671 individuals from the Apennines (Italy), using mark-recapture models. Mark-recapture data allow estimating, for given time intervals, the probability that individuals stay in the study area and survive, the so-called apparent survival. We estimated annual apparent survival to be around 0.44–0.54 for males and around 0.51–0.64 for females. Variance among years was high (range: 0.2–0.8), particularly for females. Apparent survival was lower in winter compared to summer. Female annual apparent survival was negatively correlated with warm and dry summers, whereas in males these weather variables only weakly correlated with apparent survival. Remarkably, the average apparent survival measured in this study was lower than expected. We suggest that the low apparent survival may be due to recent changes in the environment caused by global warming. Possible, non-exclusive mechanisms that could also explain the sex difference in apparent survival act via differential breeding dispersal, hyperthermia, weather-dependent food availability, and a weather-dependent trade-off between reproduction and self-maintenance. These results improve our current understanding of the mechanisms driving population dynamics in high-elevation specialist birds, which are particularly at risk due to climate change.
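The core estimation idea can be illustrated with a toy simulation. This is not the authors' actual model or data; the constant-parameter Cormack-Jolly-Seber (CJS) sketch below, with made-up parameter values, just shows how capture histories yield a survival estimate:

```python
# Toy mark-recapture simulation: recover the survival probability PHI from
# simulated capture histories under a constant-parameter CJS model.
# (With no emigration in the simulation, apparent and true survival coincide.)
import math
import random

random.seed(1)
PHI, P, T, N = 0.5, 0.6, 8, 400   # survival, recapture prob., occasions, birds

def simulate(phi, p, t):
    """Capture history after first capture at occasion 0 (1 = recaptured)."""
    alive, hist = True, [1]
    for _ in range(t - 1):
        alive = alive and random.random() < phi
        hist.append(1 if alive and random.random() < p else 0)
    return hist

def loglik(phi, p, hist):
    """CJS log-likelihood of one history, conditional on first capture."""
    last = max(i for i, h in enumerate(hist) if h)
    ll = 0.0
    for i in range(1, last + 1):          # known alive up to last sighting
        ll += math.log(phi) + math.log(p if hist[i] else 1 - p)
    chi = 1.0                             # prob. of never being seen again
    for _ in range(len(hist) - 1 - last):
        chi = 1 - phi + phi * (1 - p) * chi
    return ll + math.log(chi)

data = [simulate(PHI, P, T) for _ in range(N)]
grid = [i / 100 for i in range(30, 95, 5)]
best = max(((phi, p) for phi in grid if phi <= 0.7 for p in grid),
           key=lambda pair: sum(loglik(*pair, h) for h in data))
print(best)   # grid estimate, close to the true (0.5, 0.6)
```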
Mountain ecosystems are recognised as global biodiversity hotspots, hosting highly specialised and endemic species1,2,3 which are threatened by human-induced causes including climate change4,5,6,7,8,9. Mountain regions are particularly susceptible to climatic alterations and are experiencing a faster rate of warming compared to the global average. Indeed, the European Alps have warmed about 2 °C in the past 100 years, with the largest increase occurring in the last three decades4,5,6. In parallel to changes in temperature, the frequency of extreme weather events is also increasing10, potentially reinforcing the detrimental effects of climate warming on organisms11.
Extreme environments, such as the alpine and nival belts of mountains, are often inhabited by highly specialised species that are adapted to local conditions12. Conditions at high elevations are characterised by low average temperature, strong winds, intense solar radiation, low oxygen pressure, and high temporal and spatial variation in temperature. Extremely warm temperatures (>25 °C in the European Alps) can be followed by cold temperatures and even snow storms within minutes. Species inhabiting these variable environments must show high physiological and behavioural flexibility to cope with sudden abiotic changes within short periods of time, while also being able to endure long-lasting periods of inclement weather. Organisms specialised to extreme environments may be vulnerable to changes in their habitats and climate for the following reasons. They may already live at the edge of their physiological niche, and even small shifts in one environmental or climatic factor may render an area unsuitable13. Their ecological niche may be narrow; therefore, they may not be flexible enough to adapt their behaviour, ecology or life-history traits rapidly enough to cope with long-term and directed changes in the environment and climate12,14,15. Finally, many alpine species have a limited distributional range: the loss of a few populations increases the extinction risk of the species and consequently represents a threat to global biodiversity16.
In birds, the adaptations for living in alpine zones may be as manifold as there are species17, or even populations. Nevertheless, meta-analyses showed that populations at higher elevations have lower fecundity (number of breeding attempts and clutch size) but slightly heavier nestlings and higher juvenile survival compared to their conspecifics at low elevations (e.g.18,19,20). With regard to adult survival, we would expect alpine species to compensate for the risk of unpredictable conditions during the reproductive season with a longer life span19,21. A long life span is characteristic of some alpine bird species (e.g. the white-tailed ptarmigan Lagopus leucurus in the alpine zone of the Rocky Mountains compared to populations in the sub-alpine zone and Arctic22; an alpine subspecies of horned lark Eremophila alpestris compared to a lowland subspecies23). However, a long life span does not seem to be a universal characteristic of species living at high elevations18,20,24,25, and various calls have been made to improve basic knowledge of demographic parameters for the mountain bird community25. Improving the knowledge of demographic parameters, such as survival and reproduction, for a variety of mountain bird species would be a crucial step towards understanding how life-history traits of mountain birds are shaped by their extreme environment, and consequently the needs and vulnerability of their populations.
We studied apparent survival of a high-elevation bird species, the European subspecies of the white-winged snowfinch Montifringilla nivalis nivalis (hereafter snowfinch). It breeds in southern European mountains, exclusively above the treeline. In the Alps, the species has lost parts of its former distribution and population density decreased during the last decades26,27,28,29. There is evidence that global warming may be an important cause of this population decline: a comparison across species showed a correlation between thermal niche and changes in distribution ranges in Italy. The distribution of cold-adapted species, including the snowfinch, generally shrank during the last 30 years, whereas species of warm habitats expanded their distribution30. Further, both distribution models31,32 and fine-scaled habitat selection studies33,34 suggested that the snowfinch is highly dependent on climate-sensitive habitats (i.e. snow patches and short alpine grassland) and is therefore potentially threatened by global warming.
The specific aims of this study are threefold. First, we estimate annual apparent survival for adult males, adult females and juveniles in order to fill a knowledge gap in the life-history of this high-elevation specialist in a southern part of its European distribution. Second, we assess the role of summer and winter temperatures as well as precipitation on males' and females' annual apparent survival. Third, we describe how apparent survival changes over the annual cycle in order to identify periods with increased mortality, i.e. key information to better understand the factors driving annual apparent survival. The findings of this study will improve the understanding of the mechanisms underlying demographic trends and life history traits for a poorly studied group of species adapted to extreme, dynamic and globally changing environments.
Annual recapture probability and apparent survival
We analysed the data using a fully Bayesian approach. Our conclusions are based on the posterior distributions of the model parameters. In order to obtain posterior distributions by probability theory, we had to make assumptions about the natural process that generated our data. We explored how different assumptions affected the results by using seven different models (Table 1) fitted to two different data subsets. From the posterior distributions of the model parameters, we reported the median as a point estimate and the 2.5% and 97.5% quantiles as lower and upper limits of the compatibility interval35, which we abbreviate as CI. We tried to avoid drawing dichotomous conclusions, and instead discussed effect sizes while acknowledging various sources of uncertainty36.
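The summary quantities used throughout the paper (posterior median plus 2.5% and 97.5% quantiles) can be computed directly from posterior draws. A minimal sketch, with simulated draws standing in for real MCMC output (the distribution below is illustrative, not a fitted posterior):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for an annual apparent-survival parameter;
# in the real analysis these would come from the MCMC sampler.
draws = rng.beta(25, 25, size=4000)  # survival probability centred near 0.5

point_estimate = np.median(draws)
ci_lower, ci_upper = np.quantile(draws, [0.025, 0.975])

print(f"median = {point_estimate:.2f}, 95% CI = ({ci_lower:.2f}, {ci_upper:.2f})")
```

The same quantile computation applies to any derived quantity (e.g. differences between sexes), which is what makes the reported compatibility intervals directly comparable across models.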
Table 1 List of models used. The term in brackets after \(\varPhi \) specifies the model for apparent survival probability, and the term in brackets after p the model for recapture probability.
We fitted seven different models once to the full data set, and once to a reduced data set containing only individuals of known sex and only recaptures after the capture at which sex was first identified (see Methods). All models included separate apparent survival for age and sex classes, but they differed in the temporal structure for apparent survival. The simplest model (1) assumed constant annual apparent survival over the years. The most complex one (23b) included a specific apparent survival for the year after the first capture, random year effects for each sex separately, and linear effects of summer and winter temperature. We compared the performance of the models by predictive model checking37,38. In particular, we checked for transients (i.e. the proportion of individuals captured only once39). Furthermore, we compared the number of individuals captured at least three times between the model predictions and the observed data. A lack of fit in the number of individuals captured at least three times would indicate either an under- or overestimation of apparent survival, or heterogeneity of capture probability (e.g. specific trap responses of some individuals40).
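The idea behind this predictive check can be illustrated by simulating capture histories under a simple CJS-type model with a transient effect, then counting individuals captured exactly once and at least three times. This is a sketch only; the parameter values and simulation scheme are illustrative, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_capture_histories(n_ind, n_occ, phi, p, phi_first=None):
    """Simulate CJS-style capture histories.

    phi: apparent survival between occasions after the first interval;
    phi_first: (lower) apparent survival in the interval right after first
    capture, mimicking transient individuals. All values are illustrative.
    """
    phi_first = phi if phi_first is None else phi_first
    histories = np.zeros((n_ind, n_occ), dtype=int)
    # Occasion of first capture (never the last occasion, for simplicity)
    first = rng.integers(0, n_occ - 1, size=n_ind)
    for i in range(n_ind):
        histories[i, first[i]] = 1
        alive = True
        for t in range(first[i] + 1, n_occ):
            surv = phi_first if t == first[i] + 1 else phi
            alive = alive and rng.random() < surv  # survive and stay
            if alive and rng.random() < p:         # detected with prob. p
                histories[i, t] = 1
    return histories

h = simulate_capture_histories(n_ind=671, n_occ=15, phi=0.5, p=0.4, phi_first=0.3)
n_caps = h.sum(axis=1)
print("captured exactly once:", int((n_caps == 1).sum()))
print("captured at least 3x:", int((n_caps >= 3).sum()))
```

In a posterior predictive check, such counts would be simulated from each posterior draw and the resulting distribution compared with the observed counts (e.g. the 389 individuals captured only once).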
All models fitted to the reduced data set adequately predicted the number of individuals captured exactly once and the number of individuals captured at least three times (Table S1). The three models that accounted for transients (2b, 3b, and 23b) performed best. For the full data set, the models generally predicted the observations for the individuals with known sex adequately, except when including four environmental variables as predictors for apparent survival (summer and winter temperature and precipitation, model 4). However, for the individuals with unknown sex, only the models accounting for transients (models 2b, 3b, and 23b) performed reasonably, though not perfectly. These models predicted between 304 and 381 individuals captured only once, whereas the data contained 389 individuals captured once (Table S1). When including different effects of the environmental variables on apparent survival in the year after the first capture versus later years (interaction first capture × environmental variables), at least one of the estimated coefficients was highly uncertain (the 95% CI included the range between −1 and +1, meaning that both strong negative and strong positive correlations were compatible with the data).
Parameter estimates were consistent among all models. If not otherwise stated, we presented the results from the model including summer and winter temperature as predictors for apparent survival as well as allowing for a sex specific among-year variance and accounting for transients (model 23b). We presented the results for both data sets. Results for all models fitted to both data sets are reported in Table S2.
Average recapture probability was similar between males and females, but varied strongly among years. Recapture probabilities ranged between 0.1 and 0.8 both in the full and reduced data set (average: 0.4). Estimated recapture probabilities for each year and sex were consistent between the models and the data sets (Pearson's correlations among estimated recapture probabilities of different models were between 0.78 and 0.97).
Apparent annual survival estimates were between 0.09 and 0.16 for nestlings and first year birds (Table 2). First year apparent survival may be negatively correlated with summer temperature (estimate: −0.76, CI: −2.22, 0.39, Fig. 1). Correlation with winter temperature was unclear (−0.16, CI: −1.24, 1.02).
Table 2 Annual apparent survival estimates for individuals ringed as nestlings, for first year birds (juveniles), adult males and adult females as estimated by different models fitted to the full and reduced data.
Annual apparent survival estimates for first year birds in relation to summer temperature. Circles are medians of posterior distributions obtained by model 2b, vertical bars connect the 2.5% and 97.5% quantiles of the posterior distributions (95% compatibility intervals). The regression line is based on model 3b. Grey shaded area is the 95% compatibility interval of the regression line. For juveniles, we cannot distinguish between first year after first capture and later years, because later they are adults. Horizontal dotted line is the mean of the prior distribution.
For adults, annual apparent survival was between 0.26 and 0.28 for males and between 0.33 and 0.38 for females during the first year after the first capture (Table 2). During later years, apparent survival was between 0.44 and 0.54 for males and between 0.51 and 0.64 for females. Apparent survival was slightly but consistently higher for females compared to males. Females showed a larger among-year variance in apparent annual survival (standard deviation among years in full data: 1.41 (CI: 0.27, 3.16) for females, and 0.43 (CI: 0.02, 1.34) for males; in reduced data: 0.87 (CI: 0.05, 2.92) for females, and 0.48 (CI: 0.02, 1.68) for males, taken from the model not accounting for temperature, model 2b).
When including both precipitation and temperature as predictors for apparent survival (model 4), posterior distributions of the model coefficients became broad. The clearest correlations were a negative one between female apparent survival and summer temperature (−0.85, CI: −2.09, 0.22) in the full data set, and a positive one (1.24, CI: −0.29, 3.22) between female apparent survival and summer precipitation in the reduced data set. In both data sets, summer temperature was negatively and summer precipitation positively correlated with female apparent survival (Table S2). However, CIs were so broad that we cannot clearly conclude that each variable, independently of the other, correlates strongly with female apparent survival. Further, summer temperature and precipitation were negatively correlated (Pearson's correlation coefficient −0.39). Therefore, we present the correlation between summer temperature and apparent survival from models that include only temperature as a predictor for apparent survival, keeping in mind that warm temperatures also mean dry summers (Fig. 2). In both data sets, we found clear negative correlations between summer temperature and apparent survival of females during their first year after first capture (full data: −1.12, CI: −2.53, −0.08; reduced data: −1.07, CI: −3.05, −0.16), whereas for males this correlation does not seem to be as strong (full data: 0.03, CI: −0.55, 0.67; reduced data: −0.15, CI: −0.70, 0.42). For later years, the CI of the correlation between apparent survival and summer temperature included both strong positive and strong negative values.
When assuming that the effect of temperature does not differ between the first and later years after first capture, the correlation between temperature and female apparent survival was negative (model 3a full data: −0.78, CI: −1.75, 0.03; reduced data: −0.70, CI: −1.61, 0.07; when accounting for additional among-year variance (model 23b) full data: −0.85, CI: −2.18, 0.42; reduced data: −0.72, CI: −1.98, 0.62). For males, the correlation between summer temperature and apparent survival was probably only weak (model 3a full data: 0.18, CI: −0.47, 0.92; reduced data: −0.17, CI: −0.75, 0.42; accounting for additional among-year variance (model 23b) full data: 0.05, CI: −0.79, 0.98; reduced data: −0.18, CI: −1.11, 0.68). The posterior probability of the hypothesis that female apparent survival shows a stronger negative correlation with summer temperature than that of males is 0.90 based on the full data set and 0.79 in the reduced data set (model 23b). When looking only at apparent survival during the first year after the first capture, females clearly show a stronger negative correlation with summer temperature (posterior probability 0.97 in the full data and 0.95 in the reduced data).
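Posterior probabilities of directional hypotheses like the one above are obtained as the proportion of posterior draws satisfying the hypothesis. A sketch using hypothetical normal draws loosely matching the reported coefficient estimates (these are not the actual posteriors):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws of the summer-temperature coefficient on the
# logit of apparent survival, for females and males; means and spreads are
# chosen for illustration only.
beta_female = rng.normal(-0.85, 0.65, size=4000)
beta_male = rng.normal(0.05, 0.45, size=4000)

# Posterior probability that females show the more negative correlation:
p_hypothesis = np.mean(beta_female < beta_male)
print(f"P(beta_female < beta_male | data) = {p_hypothesis:.2f}")
```

Because the probability is computed draw-by-draw, it automatically accounts for the correlation between the two coefficients when they come from a joint posterior.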
Annual apparent survival estimates for adult females and males against mean summer (months June to September) temperature based on the full data set (upper panels, all data and accounting for individuals with unknown sex within the model) and the reduced data set (lower panels, only including data of individuals with known sex and only including capture and recapture occasions after their sex has been identified). Open circles and white regression line are apparent survival estimates in the year after first capture, filled circles and solid regression line relate to later years. Shaded area and broken lines indicate 95% compatibility intervals of the regression lines, vertical bars of the annual apparent survival estimates. Dotted horizontal line corresponds to the mean of the prior.
Correlations of winter temperature with apparent survival were generally less clear, but a positive correlation for females was evident when assuming that winter temperature affects apparent survival during the first year after first capture similarly as during later years and not allowing for additional among year variance (model 3 full data: 0.99, CI: 0.02, 2.66, reduced data: 0.44, CI: −0.26, 1.36).
Seasonal recapture probability and apparent survival
Four-month recapture probability was highest during the breeding season (full data: 0.18, CI: 0.09, 0.46, similar for males and females; reduced data: 0.23, CI: 0.13, 0.41 for males and 0.22, CI: 0.11, 0.40 for females). Between August and March, four-month recapture probability varied between 0.03 and 0.12.
Apparent seasonal survival estimates were similar between the full and reduced data sets. For adult males, apparent survival was high from breeding to winter and clearly lower from winter to breeding (Fig. 3). For females, apparent survival was already lower in autumn than in summer and remained low through the winter (Fig. 3).
Seasonal (4-months) apparent survival estimates of adult males (blue), females (orange), and first year birds (grey). Circles are based on the full data set, squares are based on the reduced data set. Given are medians of the posterior distributions, vertical bars are 95% compatibility intervals. Grey horizontal line indicates the median of the prior distribution Beta(3.6, 1.2). Deviations of the estimates from this median indicate information in the data. Winter: December–March; breeding: April–July; summer: August–November.
Of first year birds, a proportion of 0.70 (CI: 0.41, 0.97) survived and stayed in the study area until summer and of those 0.35 (CI: 0.22, 0.55) survived and stayed until their first winter. Thus, a proportion of 0.24 (CI: 0.14, 0.39) of first year birds ringed during the breeding time were still alive in the study area the following winter. The estimate of apparent survival of juveniles from winter to the next breeding season showed large uncertainty. However, given that apparent annual survival of first year birds was around 0.10–0.15, we can expect that a proportion of around 0.5 of those individuals alive and present in winter will survive and stay in the study area until the next breeding season.
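The chained proportions above follow from multiplying the seasonal point estimates (medians only; the paper's computation propagates the full posterior uncertainty, which these back-of-the-envelope numbers ignore):

```python
# Seasonal (4-month) apparent-survival point estimates for first-year birds,
# as reported above.
breeding_to_summer = 0.70   # fledging until summer
summer_to_winter = 0.35     # summer until first winter

until_winter = breeding_to_summer * summer_to_winter
print(f"alive and present in first winter: {until_winter:.2f}")  # ~0.24

# With annual apparent survival around 0.10-0.15, the implied
# winter-to-breeding survival is roughly annual / until_winter:
annual = 0.12
winter_to_breeding = annual / until_winter
print(f"implied winter-to-breeding survival: {winter_to_breeding:.2f}")  # ~0.5
```

This is why the text can state an approximate winter-to-breeding proportion of around 0.5 even though that parameter itself was estimated with large uncertainty.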
The strong among-year variance in annual recapture probabilities may reflect the strong among-year variance in snowfinch breeding and spatial behaviour, driven by the strong variation in weather and food conditions typical of high-elevation environments41. The capture effort, measured as the number of field days, was fairly constant across years. However, capture effort was much higher in summer than in winter. Additionally, during the breeding season, snowfinch spatial behaviour is easier to predict because the birds are involved in reproduction, explaining the higher capture probability during the breeding season compared to the rest of the year.
The average annual apparent adult survival estimated in this study for snowfinches, based on mark-recapture data from 671 individuals in the Central Apennines, was around 0.50 for males and between 0.51 and 0.64 for females. These apparent survival estimates seem to be lower than earlier comparable measures for snowfinches in the Eastern Alps. Lindner (2002)42 reported that, out of 24 breeding birds, 14 (a proportion of 0.58) returned in the next breeding season. From 482 birds ringed in the Austrian Alps during the years 1973–1994 by A. Aichhorn, 52 were recaptured later43. The mean age of these recaptured birds was 4.4 years (the oldest bird was 14 years), and 12 out of 52 birds were at least 6 years old when they were recaptured. In our data, none out of 138 recaptured birds was older than 6 years. Thus, the annual adult apparent survival measured in this study is very likely substantially lower than it was in the Austrian Alps 30 years earlier. Also, a comparison with the phylogenetically related, but 30% smaller, house sparrow Passer domesticus suggests that we could expect a higher apparent survival than the one we measured. Based on a mark-recapture data set from Norway, Holand et al.44 estimated an annual apparent survival between 0.6 and 0.7 for the house sparrow. According to allometric relationships, we would expect the snowfinch to have a higher survival than the house sparrow45,46.
Our estimate of annual apparent survival may be lower than expected or measured elsewhere because of methodological or ecological reasons. In our analyses, we might not have accounted for all capture heterogeneity. Capture heterogeneity is present when groups of individuals have a different probability of being captured. Not accounting for capture heterogeneity in a mark-recapture model can lead to an underestimation of survival47, e.g. if weak individuals are captured with a higher probability. More interestingly, snowfinches in our study area may show a lower apparent survival than those in the Alps because, in the Central Apennines, dispersal rates are higher or the average life span is shorter. Reasons for this difference could be unfavourable environmental conditions, or local adaptations of life-history characteristics.
Our models accounted for differences in capture probability and apparent survival between age and sex classes (first-year birds, adult males and adult females), as well as among years and seasons. We did not account for potential differences between age classes among adults, because exact age was known only for the few individuals ringed as nestlings, nor did we relate capture probability to body condition. However, any bias produced by mist-nets capturing weak birds with a higher probability than strong birds must also have occurred 30 years earlier in Austria. Therefore, we do not think that the difference between our estimate of annual adult apparent survival and those for Austrian snowfinches 30 years earlier can be explained by unaccounted heterogeneity in capture probability alone.
Low apparent survival could have resulted from local adaptations of life-history traits in the snowfinch populations of the Apennines (e.g.19). Because of the more southern latitude of the Apennines compared to the Alps, the summer seasons have always been warmer and longer, probably providing more time and better conditions for the broods. Average clutch size may be slightly higher in the Apennines (mean 4.4, range 3–5, n = 4848) compared to the Alps (mean 3.9 eggs, range 2–6, n = 33, own unpublished data). Additionally, the proportion of second broods may be higher when the season is longer, but no data on the proportion of second broods are available. It may be that snowfinch populations in the Apennines invest more energy in reproduction than in survival as an adaptation to local conditions. Alternatively, snowfinches in the Apennines may naturally disperse more often after breeding than snowfinches in the Alps. Indeed, the lower apparent survival of adults during the first year after first capture compared to later years indicates that some of the individuals captured at the study sites do not stay in the study area. However, even after accounting for such transient individuals in our models, apparent survival estimates were still unexpectedly low. Snowfinches in the Central Apennines may also disperse regularly after having stayed for some years, e.g. after having experienced low breeding success49,50. Further, breeding dispersal in birds is generally higher in females than in males51,52,53, which would lead to a lower apparent survival in females than in males. We do not see lower apparent survival in females compared to males in our data (Table 2). Therefore, either the snowfinches in the Central Apennines show breeding dispersal patterns not typical for birds, or breeding dispersal may be low and the apparent survival estimates presented here for the second and later years after first capture may be close to true survival.
To what extent snowfinches in the Central Apennines perform breeding dispersal clearly needs further investigation.
Obviously, local conditions in the Apennines have changed dramatically during the last decades: mean annual temperature increased by 2 °C within the last 60 years, and snow precipitation decreased by 50% during the last decade in our study area54. Thermophilic and nutrient-demanding plant species became more abundant, whereas cold-tolerant plant species declined in the Apennines during the last 42 years54,55,56. Consequently, the quantity and quality of available seeds (the main snowfinch food, exclusively so in winter) and the accessibility of ground-living insects (important nestling food) have presumably changed during the last decades. Such changes in food availability have the potential to negatively affect survival and/or to increase breeding dispersal due to low breeding success. Therefore, our results may complement the many studies that showed population declines of mountain birds due to habitat loss induced by climate change9,57,58. The low adult apparent survival found in this study may indicate that, for the snowfinch, climate-induced population declines may act, besides other mechanisms, via reduced survival of adults or increased emigration.
The strong among-year variance in female apparent survival suggests that it depends on weather. Indeed, female snowfinch apparent survival was much lower in years with warm and/or dry summers, whereas this pattern was much weaker in males. We further showed that adult apparent survival is lower during winter than during summer. We would therefore expect weather conditions during winter to be more important in determining annual apparent survival than summer weather conditions. However, at least for females, summer conditions showed a stronger correlation with annual apparent survival than winter weather conditions.
Compared to males, female snowfinches are slightly smaller (1% in body mass, 5% in wing length59). The eggs are incubated exclusively by females60. Both parents feed the young, but females presumably more intensively than males, as observed e.g. in the house sparrow61. Overall, reproductive investment in snowfinches seems to be substantially higher for females than for males. Warm and dry summers may have direct or indirect effects on apparent survival, potentially differing between male and female snowfinches, via (1) hyperthermia, (2) food availability and accessibility in winter, and (3) the trade-off between reproduction and self-maintenance.
First, hot and dry weather conditions can cause physiological problems due to dehydration or hyperthermia. Birds adapted to living in cold climates seem to be particularly at risk of hyperthermia. For example, in ptarmigans (Lagopus muta and L. leucurus), body temperature and evaporative water loss increased at temperatures above 30 °C62,63. In direct sunlight, ptarmigans actively seek shelter from the sun even at much lower temperatures (i.e., above 21 °C64). High temperatures can cause direct mortality through hyperthermia and dehydration, or reduce the time available for foraging and maintenance because of the need to seek shelter65, and thereby indirectly increase mortality. Heat stress may affect females more strongly than males because of their different roles in rearing the brood.
Second, the main food of adult snowfinches consists of wildflower seeds, particularly in winter66,67. During warm and dry summers, the seed production of wildflowers can be lower than in cool and wet summers (e.g. Campanula thyrsoides68). Summer conditions may therefore affect food availability in the following winter, and thus survival or dispersal in winter. During the winter, snowfinches usually forage in flocks, where individuals compete when food is scarce69. In case of competition, males may dominate over females, and food shortage may therefore affect females more severely than males70. For the Alpine chough Pyrrhocorax graculus, which inhabits habitat similar to that of the snowfinch, Chiffard et al.71 recently also hypothesised that food shortage could lead to lower survival in females than in males due to competition. Further, males are slightly larger than females. A larger body size may be an advantage for enduring food shortage, or when access to food is more difficult because a snow layer prevents or impedes access to seeds.
Third, warm summers may increase reproductive effort either by allowing second broods or by increasing the effort needed to raise a brood. During years with medium to early snow melt, an unknown proportion of breeding pairs lay a second clutch60,72,73, but when snow melt is late, snowfinches can even skip breeding41. Therefore, in warm and dry summers we expect a higher proportion of breeding pairs to raise two broods. On the other hand, nestling food availability may be reduced during warm and dry summers because snow patches vanish quickly33. Along the edges of melting snow patches, snowfinches forage for Tipulidae larvae, which constitute the most important food for their nestlings41. Broods raised in close proximity to melting snow patches have higher breeding success than broods without snow patches in close vicinity41. A lack of melting snow patches during the rearing period (mid May to mid August73) may therefore imply a higher effort for the parents, and/or a reduction of breeding success. Breeding dispersal is normally increased after a brood has failed52. Both mechanisms, an increased number of second clutches or deteriorating breeding conditions, may lead to a higher energy investment in reproduction at the cost of allocating energy to self-maintenance, which is paid for by lower survival74,75. An increase in the number of clutches or a reduction of nestling food availability may affect energy expenditure more in females than in males, because the energy invested in the brood may be higher for the former, and/or because the proportion of non-breeders may be higher among males than among females.
To summarize, we currently do not know why female annual apparent survival is negatively affected by warm and dry summer conditions. However, our results indicate that weather potentially affects apparent survival of males and females differently, which may be either via differences in direct physiological effects, via food resources or via the balance of energy allocation to reproduction and self-maintenance.
Under future climate change scenarios for the Mediterranean region, summers are projected to become warmer and drier76, which, according to this study, could potentially lead to an increase in snowfinch female dispersal and/or a decrease in female survival. It remains uncertain whether reproductive output can be increased to compensate for reduced survival, or whether immigration from the Alps or the Pyrenees may compensate for increased emigration. We do not expect an increase in reproduction in the future, because extreme weather events are predicted to become more frequent due to climatic change10, and therefore the risk of losing a brood due to stochastic events may also increase. How strongly the populations in the Apennines are genetically connected to those in other mountain regions is the topic of current research projects.
There is general evidence that negative population trends of cold-adapted species are due to habitat loss caused by global warming77,78. Climate change induced habitat loss is also expected31 and has already been observed26,28 for the snowfinch. The expected decrease in female apparent survival with global warming constitutes an additional threat to this species, making its future look critical. Similar threats may potentially also affect other cold-adapted species. The different responses in apparent survival to climatic variables between the sexes shown in our study indicate that the mechanisms by which climate change impacts the species' demography may be complex. High-quality data on demographic parameters (including breeding success, natal and breeding dispersal) from different populations of different species living at high elevations are urgently needed in order to take effective measures for counteracting the negative population trends9,79,80.
Study site and the capture-recapture data set
From June 2003 to June 2017, 671 snowfinches were caught in the Apennines, within the Gran Sasso and Monti della Laga National Park, Italy, specifically within an area of 3 km2 around Campo Imperatore (42°27 N, 13°34 E, 2200 m asl, see48). Birds were captured all year round, using mist nets and nest traps (Table 3). Number of days with snowfinch capturing ranged between 41 and 55 per year. On average, 48 field days took place between April and October, and 4 between November and March. The positioning and length of nets used for trapping, and the time spent trapping per day could not be standardised because of the highly variable spatial behaviour of the birds and the variable weather conditions.
Table 3 Monthly distribution of the first captures (total 671 individuals), the proportions of these birds recaptured at least once later, and the monthly distribution of the 211 recaptures between June 2003 and June 2017 (total 15 years).
Snowfinches were marked with individual metal rings and, if possible, their age and sex were identified according to Strinella (2013)59. Of the 671 individuals captured, 101 were marked as nestlings and 570 as fully grown individuals. Almost a quarter of the individuals (157 individuals) were identified as males, 104 as females, whereas for 410 individuals (61%) sex could not be identified (Table 4). Of the 671 marked individuals, 138 were later recaptured between 1 and 6 times.
Table 4 Number of individuals captured in each year of the study period depicted for adult males, adult females and individuals of which the sex was not identified (mostly first year birds).
Bird capturing and marking was authorised by the Institute for Environmental Protection and Research ISPRA (ES, licence CNI ISPRA no. 0114). Capturing and marking were carried out in accordance with guidelines and regulations of ISPRA.
We obtained data on daily minimum and maximum temperatures (°C) and precipitation (mm per day) from two local weather stations (Assergi: 42°24′N 13°30′E, 992 m asl; and Castel del Monte: 42°22′N 13°43′E, 1346 m asl; Ufficio Idrografico e Mareografico Regione Abruzzo) for the years 2003 to 2017. Daily minimum and maximum temperature were highly correlated (Pearson's correlation r = 0.93). We used the average of the minimum and maximum temperature of both stations as a measure of average daily temperature that is sensitive to extreme temperature values. Daily precipitation was summed over the two stations in order to obtain a measure of precipitation in the study area. We then averaged daily temperature and precipitation over the summer months (June to September) and over the winter months (November to March) for each year. These four weather variables were used to predict annual apparent survival (from summer to summer).
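The construction of these four weather covariates can be sketched as follows. This is an illustrative Python reimplementation with made-up daily values, not the authors' code:

```python
def daily_temperature(tmin_a, tmax_a, tmin_b, tmax_b):
    """Average of daily minimum and maximum temperature over both
    stations (deg C): a measure sensitive to extreme values."""
    return (tmin_a + tmax_a + tmin_b + tmax_b) / 4.0

def daily_precipitation(prec_a, prec_b):
    """Precipitation summed over the two stations (mm per day)."""
    return prec_a + prec_b

def seasonal_mean(daily_values):
    """Average a daily series over a season, e.g. June to September."""
    return sum(daily_values) / len(daily_values)

# Two made-up summer days (station A and station B min/max in deg C):
temps = [daily_temperature(10, 22, 8, 20), daily_temperature(12, 26, 10, 24)]
summer_temp = seasonal_mean(temps)  # -> 16.5
```

The same seasonal averaging would then be applied to the precipitation series for the summer and winter windows.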
Precipitation in winter correlated positively with precipitation in summer (Pearson's correlation r = 0.50). In summer, temperature correlated negatively with precipitation (r = −0.39). Weak positive correlations existed between winter temperature and precipitation in summer (r = 0.27), and precipitation in winter (r = 0.15), respectively. All other correlations were weaker than 0.1. Over the course of the study period, average summer temperature did not show any trend, whereas average winter temperature showed a weak positive trend (Fig. 4).
Average summer and winter temperature for each year of the study period. Summer temperature is the average temperature for the months June to September, winter temperature is the average between November and March.
We did not consider weather variables during the breeding season because most birds were captured during or shortly after the breeding season (Table 3). Consequently, the length of time an individual is exposed to spring conditions during its first year after marking depends on the date of marking. We only included weather variables that could unambiguously be assigned to one summer-to-summer interval.
Survival models
General model structure
We used mark-recapture models81,82,83,84 that we applied to two different temporal aggregations of the mark-recapture data set. The first analysis aimed at measuring average annual apparent survival and investigating correlations between weather variables and annual apparent survival. In the second analysis, we described seasonal patterns of apparent survival probabilities. The general model structure was the same in both analyses, but they differed in the length of the time intervals (years vs. four-month periods) and in the predictors for survival (see below). For the first analysis, we aggregated the data in annual time intervals (1st January – 31st December; the mean capture date within this interval is 30th June). For the second analysis, four-month time intervals were used. For the annual data, time interval \(t\) was one year (of 15 years in total), and for the seasonal data, time interval \(t\) was four months (of 43 four-month periods or "seasons" in total).
The observations \({y}_{it}\), an indicator of whether individual \(i\) was recaptured during time interval \(t\), were modelled as a Bernoulli variable conditional on the latent state of the individual birds \({z}_{it}\) (0 = dead or permanently emigrated, 1 = alive and at the study site). The probability \(P({y}_{it}\mathrm{=1)}\) is the product of the probability that an alive individual is recaptured, \({p}_{it}\), and the state of the bird \({z}_{it}\). Thus, a dead or permanently emigrated bird cannot be recaptured, whereas for a bird alive during time interval \(t\) the recapture probability equals \({p}_{it}\):
$${y}_{it} \sim Bernoulli({z}_{it}{p}_{it})$$
The latent state variable \({z}_{it}\) is a Markovian variable with the state at time \(t\) being dependent on the state at time \(t-1\) and the apparent survival probability \({\Phi }_{it}\):
$${z}_{it} \sim Bernoulli({z}_{it-1}{\Phi }_{it})$$
We use the term "apparent survival" to indicate that the parameter \(\Phi \) is a product of site fidelity and survival. Thus, individuals that permanently emigrated from the study area cannot be distinguished from dead individuals.
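The data-generating process defined by the two Bernoulli equations above can be simulated directly. The following sketch (a Python illustration, not the authors' code) generates one capture history, conditioning on the bird being alive and captured at marking:

```python
import random

def simulate_capture_history(phi, p, n_occasions, seed=None):
    """Simulate y[0..n-1] under the state-space model above:
    z[t] ~ Bernoulli(z[t-1] * phi), y[t] ~ Bernoulli(z[t] * p).
    Occasion 0 is the marking event (alive and captured)."""
    rng = random.Random(seed)
    z, y = True, [1]                   # alive and captured at marking
    for _ in range(n_occasions - 1):
        z = z and rng.random() < phi               # survive (or stay)
        y.append(1 if z and rng.random() < p else 0)  # recapture step
    return y

history = simulate_capture_history(phi=0.5, p=0.6, n_occasions=5, seed=1)
```

Note that once `z` becomes false it stays false, matching the Markovian assumption that a dead or permanently emigrated bird can never be recaptured.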
In both models, the parameters \(\Phi \) and \(p\) were modelled as sex-specific. However, for 61% of the individuals, sex could not be identified, i.e. sex was missing. Ignoring the individuals with missing sex would most likely lead to a bias, because they were not missing at random. The probability that sex can be identified increases with age and most likely differs between the sexes. Further, in our data, the probability that sex could be identified varied across the study period because different methods (genetics, plumage, breeding patch) were used in different years, and sex identification literature became available during the study period59. As a consequence, we cannot use our data to estimate the sex-specific probability of identifying the sex of an individual85. However, we can include the missing sexes using a mixture model structure similar to Pledger (2000)86, who introduced a mixture model for unknown classes. In our case, for part of the individuals, the class (sex) was known. We imputed the sex assignment for non-identified individuals using a categorical distribution with a uniform \(Beta\mathrm{(1,1)}\) distribution for the probability of being a male \({q}_{i}\mathrm{[1]}\):
$$Se{x}_{i} \sim Categorical({{\bf{q}}}_{i})$$
where, for every non-identified individual, \({{\bf{q}}}_{i}\) is a vector of length 2, containing the probability of being a male and a female, respectively. The sex of each non-identified individual was therefore assumed to be male or female with probability \({q}_{i}\mathrm{[1]}\) and \({q}_{i}\mathrm{[2]}=1-{q}_{i}\mathrm{[1]}\), respectively. A uniform distribution between 0 and 1 was assumed for \({q}_{i}\mathrm{[1]}\). In this way, no specific sex was assigned to these individuals, but their data were still used for the survival estimates, preventing these from being overestimated. Indeed, the posterior distributions of the \({q}_{i}\mathrm{[1]}\) were close to a uniform distribution in all models. Therefore, we do not present them in the results.
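Because Stan cannot sample discrete parameters directly, such a categorical sex assignment is typically implemented by marginalizing the two sex classes out of the likelihood. A minimal sketch of that mixture contribution, with hypothetical likelihood values (not the authors' code):

```python
def marginal_likelihood(lik_male, lik_female, q_male):
    """Mixture likelihood for an individual of unknown sex: weight the
    sex-specific capture-history likelihoods by the probability q_male
    of being a male (1 - q_male for a female)."""
    return q_male * lik_male + (1.0 - q_male) * lik_female

# Hypothetical likelihoods of one capture history under each sex class:
contrib = marginal_likelihood(lik_male=0.04, lik_female=0.10, q_male=0.5)
# -> 0.07
```

With identified individuals the corresponding sex-specific likelihood is used directly; only the unsexed individuals enter through this weighted sum.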
In addition, we fitted all models without the mixture structure to a reduced data set including only individuals with identified sex, and only the recaptures after their sex had first been ascertained87. Except for 5 individuals, all individuals were adult when their sex was ascertained. These 5 individuals were excluded from the analyses of the reduced data set. In such a reduced data set, individuals that show clear sex-specific characteristics and that are strong enough to live long will be over-represented. Consequently, the results may not be representative of the snowfinch population in the Apennines. On the other hand, the full data set may not be a random sample of individuals either, because inexperienced or highly active individuals are more likely to be captured by mist nets than experienced or less active individuals88,89. Therefore, we present the results from the analyses of both the full and reduced data sets.
Annual apparent survival models
We used seven different models for annual apparent survival that differed in their temporal structure of apparent survival (Table 1). In the first model, we assumed constant apparent survival over time, but included different apparent survival for age and sex classes (3 levels: first year birds, adult males and adult females):
Model 1: \({\Phi }_{it}={a}_{age\mathrm{}.sex[it]}\)
In the second model, we included a sex-specific random year effect:
Model 2a: \(logit({\Phi }_{it})=a{0}_{age\mathrm{}.sex[it]}+{\gamma }_{sex[i]t}\) with \({\gamma }_{sex[i]t} \sim Normal\mathrm{(0,}\sigma )\).
The third model is similar to model 2a, but it includes for each age and sex class a separate apparent survival for the first year after the first capture (first occasion). It thus estimates, for each sex, two adult apparent survival probabilities: one for the first year after the first capture and one for the second and later years after the first capture. Because juveniles become adults after one year, the model includes only one apparent survival probability for juveniles.
Model 2b: \(logit({\Phi }_{it})=a{0}_{age\mathrm{}.sex[it],firstoccasion[it]}+{\gamma }_{sex[i]t}\) with \({\gamma }_{sex[i]t} \sim Normal\mathrm{(0,}\sigma )\), where the variable firstoccasion contains a 1 for the first occasion and a 2 for later occasions.
In the following four models, we modelled annual apparent survival to be linearly related to average summer and average winter temperature (summertemp, wintertemp, models 3a, 3b, 23b, and 4). In the last model (model 4), we also included precipitation (summerprec, winterprec) as predictors. We estimated different effects of temperature and precipitation on apparent survival for juveniles, adult males and adult females:
Model 3a: \(logit({\Phi }_{it})=a{0}_{age\mathrm{}.sex[it]}+a{1}_{age\mathrm{}.sex[it]}summertem{p}_{t}+a{2}_{age\mathrm{}.sex[it]}wintertem{p}_{t}\)
Model 3b was similar to model 3a but included separate apparent survival and separate correlations between temperature and apparent survival during the first year after first capture and during the second or later years after the first capture.
Model 3b: \(logit({\Phi }_{it})=a{0}_{age\mathrm{}.sex[it],firstoccasion[it]}+a{1}_{age\mathrm{}.sex[it],firstoccasion[it]}summertem{p}_{t}+a{2}_{age\mathrm{}.sex[it],firstoccasion[it]}\) \(wintertem{p}_{t}\)
Model 23b combines the random year structure of model 2 with the linear relationship with summer and winter temperature of model 3, and it also includes separate apparent survival probabilities for the first and later years after the first capture. However, in model 3b the separate correlations with the temperature variables for the first and later years after the first capture could not be estimated well (low sample size). Therefore, in model 23b we estimated only one correlation between apparent survival and each of the temperature variables and assumed that this correlation was the same for the first and later years after the first capture.
Model 23b: \(logit({\Phi }_{it})=a{0}_{age\mathrm{}.sex[it],firstoccasion[it]}+a{1}_{age\mathrm{}.sex[it]}summertem{p}_{t}+a{2}_{age\mathrm{}.sex[it]}wintertem{p}_{t}+{\gamma }_{sex[i]t}\) with \({\gamma }_{sex[i]t} \sim Normal\mathrm{(0,}\sigma )\)
In the last model, we included summer and winter temperature and summer and winter precipitation as predictors for apparent survival.
Model 4: \(logit({\Phi }_{it})=a{0}_{age\mathrm{}.sex[it]}+a{1}_{age\mathrm{}.sex[it]}summertem{p}_{t}+a{2}_{age\mathrm{}.sex[it]}wintertem{p}_{t}+a{3}_{age\mathrm{}.sex[it]}summerpre{c}_{t}+a{4}_{age\mathrm{}.sex[it]}winterpre{c}_{t}\)
In all models, annual recapture probability was modelled for each year and sex independently: \({p}_{it}=b{0}_{t,sex[it]}\). Because all individuals were at least one year old when they could be recaptured for the first time, we did not include age as a predictor for recapture probability.
Uniform prior distributions were used for all parameters with a parameter space limited to values between 0 and 1 for probabilities. A normal distribution with a mean of 0 and a standard deviation of 1.5 was used for the intercept \(a0\), and for \(a1\), \(a2\), \(a3\), and \(a4\) a standard deviation of 3 was used.
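For illustration, the logit-linear predictor shared by models 3a to 4 can be evaluated as follows. The coefficients here are hypothetical, chosen only to show the mechanics, and are not estimates from this study:

```python
import math

def inv_logit(x):
    """Back-transform from the logit scale to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def apparent_survival(a0, a1, a2, summertemp, wintertemp):
    """Model-3a-style predictor for one age/sex class:
    logit(Phi) = a0 + a1*summertemp + a2*wintertemp."""
    return inv_logit(a0 + a1 * summertemp + a2 * wintertemp)

# With a negative summer-temperature coefficient, a warm summer implies
# lower apparent survival than a cool one (hypothetical values):
s_cool = apparent_survival(0.2, -0.5, 0.1, summertemp=-1.0, wintertemp=0.0)
s_warm = apparent_survival(0.2, -0.5, 0.1, summertemp=1.0, wintertemp=0.0)
```

The logit link guarantees that the predicted survival probability stays between 0 and 1 for any values of the weather covariates.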
Seasonal survival model
We assumed that four-month survival differed between age and sex classes (juveniles, adult males, adult females) and seasons (winter: December – March, breeding: April – July, summer: August – November), \({\Phi }_{it}={a}_{sex\mathrm{}.age[i],season[t]}\). Independent, slightly informative prior distributions \({a}_{sex\mathrm{}.age[i],season[t]} \sim Beta\mathrm{(3.6,1.2)}\) were used. This prior gives 95% of its mass to values between 0.33 and 0.99 and has a median of 0.79. An average survival of 0.79 over 4 months corresponds to an annual survival of 0.49. By choosing a prior distribution with a mean corresponding approximately to the overall mean of the data, we make sure that estimates for specific seasons that deviate from the overall mean reflect information inherent to the data. Using a uniform prior, \(Beta\mathrm{(1,1)}\), with a mean of 0.5 would result in estimates close to 0.5 for seasons with a small sample size, i.e. during winter, which would bias the conclusions on seasonal differences in survival. Recapture probability was assumed to depend on season, sex and year, using the logit link function and assigning a normal distribution to the year effects:
$$logit({p}_{it})=b{0}_{season[t],sex[i]}+{\gamma }_{year[t]}\,\mathrm{where}\,{\gamma }_{year[t]} \sim Normal\mathrm{(0,}\sigma )$$
Independent normal prior distributions were specified for the average logit-transformed recapture probabilities,
$$b{0}_{season[t],sex[i]} \sim Normal\mathrm{(0,1.5)}.$$
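The properties stated above for the \(Beta\mathrm{(3.6,1.2)}\) prior, and the annual survival implied by three four-month intervals, can be checked numerically by Monte Carlo. This is an illustrative check in Python, not part of the original analysis:

```python
import random

# Draw a large sample from the Beta(3.6, 1.2) prior:
rng = random.Random(42)
draws = sorted(rng.betavariate(3.6, 1.2) for _ in range(100_000))

median = draws[len(draws) // 2]   # ~0.79, the prior median
lower = draws[2_500]              # ~0.33, lower end of the 95% mass
upper = draws[97_500]             # ~0.99, upper end of the 95% mass
annual = 0.79 ** 3                # three 4-month intervals -> ~0.49
```

The sample quantiles reproduce the stated 95% interval (0.33 to 0.99) and median (0.79), and cubing the four-month median gives the quoted annual survival of about 0.49.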
Model fitting and predictive model checking
We used Hamiltonian Monte Carlo as implemented in Stan90 to fit the models to the data. We simulated 4 Markov chains of length 2000 and used the second half of each chain for the description of the posterior distributions of the model parameters.
Convergence and mixing of the Markov chains were assessed by the metrics and diagnostic plots provided by rstan91 and shinystan92 packages, i.e. no divergent transition, number of effective samples above 1000, Monte Carlo errors below 10%, and R-hat value below 1.01.
In order to assess the goodness of fit, we used R 3.6.193 to simulate from the model 1000 new capture histories for each individual in the data. For every draw, we used a different set of parameter values from the simulated joint posterior distribution of the model parameters (generated by Hamiltonian Monte Carlo in Stan, as described above). These 1000 new data sets represent how the model "thinks" the data should look38. For every new data set, we extracted the number of individuals captured exactly once and the number of individuals captured at least three times. We compared these two statistics between the 1000 new data sets and the observed data.
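The two summary statistics used in this check can be computed from any set of capture histories. A minimal Python sketch (the original analysis used R):

```python
def ppc_statistics(histories):
    """Summary statistics for the posterior predictive check: the number
    of individuals captured exactly once, and the number captured at
    least three times."""
    totals = [sum(h) for h in histories]
    exactly_once = sum(1 for t in totals if t == 1)
    at_least_three = sum(1 for t in totals if t >= 3)
    return exactly_once, at_least_three

# Four toy capture histories, captured 1, 2, 3 and 3 times respectively:
stats = ppc_statistics([[1, 0, 0], [1, 1, 0], [1, 1, 1], [1, 0, 1, 1]])
# -> (1, 2)
```

Comparing these statistics between the simulated and observed data indicates whether the model reproduces both the frequency of birds seen only once (potential transients) and that of frequently recaptured individuals.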
Data archived on Dryad (https://doi.org/10.5061/dryad.6wwpzgmtt).
Dirnböck, T., Essl, F. & Rabitsch, W. Disproportional risk for habitat loss of high-altitude endemic species under climate change. Global Change Biology 17, 990–996, https://doi.org/10.1111/j.1365-2486.2010.02266.x (2011).
Körner, C. & Ohsawa, M. Mountain systems. In Hassan, R., Scholes, R. & Ash, N. (eds.) Ecosystem and Human Well-being: Current State and Trends, 681–716 (Island Press, Washington, 2006).
Myers, N., Mittermeier, R. A., Mittermeier, C. G., da Fonseca, G. A. & Kent, J. Biodiversity hotspots for conservation priorities. Nature 403, 853–858, https://doi.org/10.1038/35002501 (2000).
Auer, I. et al. HISTALP—historical instrumental climatological surface time series of the Greater Alpine Region. International Journal of Climatology 27, 17–46, https://doi.org/10.1002/joc.1377 (2007).
Böhm, R. et al. Regional temperature variability in the European Alps: 1760–1998 from homogenized instrumental time series. International Journal of Climatology 21, 1779–1801, https://doi.org/10.1002/joc.689 (2001).
Brunetti, M. et al. Climate variability and change in the Greater Alpine Region over the last two centuries based on multi-variable analysis. International Journal of Climatology 29, 2197–2225, https://doi.org/10.1002/joc.1857 (2009).
Lehikoinen, A. et al. Declining population trends of European mountain birds. Global Change Biology 25, 577–588, https://doi.org/10.1111/gcb.14522 (2019).
Pepin, N. et al. Elevation-dependent warming in mountain regions of the world. Nature Climate Change 5, 424–430 (2015).
Scridel, D. et al. A review and meta-analysis of the effects of climate change on Holarctic mountain and upland bird populations. Ibis 25, 263, https://doi.org/10.1111/ibi.12585 (2018).
Dieffenbaugh, N. S. et al. Quantifying the influence of global warming on unprecedented extreme climate events. Proceedings of the National Academy of Sciences of the United States of America 114, 4881–4886, https://doi.org/10.1073/pnas.1618082114 (2017).
Chapin, S. F. & Körner, C. Arctic and alpine biodiversity: Patterns, causes and ecosystem consequences. Trends in Ecology & Evolution (TREE) 9, 45–47, https://doi.org/10.1016/0169-5347(94)90266-6 (1994).
Cheviron, Z. A. & Brumfield, R. T. Genomic insights into adaptation to high-altitude environments. Heredity 108, 354–361, https://doi.org/10.1038/hdy.2011.85 (2012).
Tingley, M. W., Monahan, W. B., Beissinger, S. R. & Moritz, C. Birds track their Grinnellian niche through a century of climate change. Proceedings of the National Academy of Sciences USA 106, 19637–19643 (2009).
Barve, S., Dhondt, A. A., Mathur, V. B. & Cheviron, Z. A. Life-history characteristics influence physiological strategies to cope with hypoxia in Himalayan birds. Proc. R. Soc. B 283, https://doi.org/10.1098/rspb.2016.2201 (2016).
Martin, K. & Wiebe, K. L. Coping mechanisms of alpine and arctic breeding birds: Extreme weather and limitations to reproductive resilience. Integr. Comp. Biol. 44, 177–185 (2004).
La Sorte, F. A. & Jetz, W. Projected range contractions of montane biodiversity under global warming. Proc. R. Soc. B 277, 3401–3410, https://doi.org/10.1098/rspb.2010.0612 (2010).
Potapov, R. L. Adaptation of birds to life in high mountains in Euroasia. Acta Zoologica Sinica 50, 970–977 (2004).
Badyaev, A. V. & Ghalambor, C. K. Evolution of life histories along elevational gradients: trade-off between parental care and fecundity. Ecology 82, 2948–2960 (2001).
Bears, H., Martin, K. & White, G. C. Breeding in high-elevation habitat results in shift to slower life-history strategy within a single species. Journal of Animal Ecology 78, 363–375 (2009).
Boyle, A. W., Sandercock, B. K. & Martin, K. Patterns and drivers of intraspecific variation in avian life history along elevational gradients: a meta-analysis. Biological Reviews of the Cambridge Philosophical Society 91, 469–482, https://doi.org/10.1111/brv.12180 (2016).
Tavecchia, G. et al. Temporal variation in annual survival probability of the Eurasian woodcock Scolopax rusticola wintering in France. Wildlife Biology 8, 21–30 (2002).
Sandercock, B. K., Martin, K. & Hannon, S. J. Demographic consequences of age-structure in extreme environments: population models for arctic and alpine ptarmigan. Oecologia 146, 13–24, https://doi.org/10.1007/s00442-005-0174-5 (2005).
Camfield, A. F., Pearson, S. F. & Martin, K. Life history variation between high and low elevation subspecies of horned larks Eremophila spp. Journal of Avian Biology 41, 273–281 (2010).
Sandercock, B. K., Martin, K. & Hannon, S. J. Life history strategies in extreme environments: Comparative demography of arctic and alpine ptarmigan. Ecology 86, 2176–2186 (2005).
Hille, S. M. & Cooper, C. B. Elevational trends in life histories: Revising the pace-of-life framework. Biological Reviews of the Cambridge Philosophical Society 90, 204–213, https://doi.org/10.1111/brv.12106 (2015).
Issa, N. & Muller, Y. Atlas des oiseaux de France métropolitaine: Nidification et présence hivernale (Delachaux et Niestlé, Paris, 2015).
Kilzer, R., Willi, G. & Kilzer, G. Atlas der Brutvögel Vorarlbergs (Bucher, Hohenems, 2011).
Knaus, P. et al. Schweizer Brutvogelatlas 2013 - 2016: Verbreitung und Bestandsentwicklung der Vögel in der Schweiz und im Fürstentum Lichtenstein (Schweizerische Vogelwarte, Sempach, 2018).
Nardelli, R. et al. Rapporto sull'applicazione della Direttiva 147/ 2009/CE in Italia: dimensione, distribuzione e trend delle popolazioni di uccelli (2008–2012).
Scridel, D. et al. Thermal niche predicts recent changes in range size for bird species. Climate Research 73, 207–216, https://doi.org/10.3354/cr01477 (2017).
Brambilla, M., Pedrini, P., Rolando, A. & Chamberlain, D. E. Climate change will increase the potential conflict between skiing and high-elevation bird species in the Alps. Journal of Biogeography 43, 2299–2309 (2016).
Brambilla, M. et al. A spatially explicit definition of conservation priorities according to population resistance and resilience, species importance and level of threat in a changing climate. Diversity and Distributions 7, 853, https://doi.org/10.1111/ddi.12572 (2017).
Brambilla, M. et al. Foraging habitat selection by Alpine white-winged snowfinches Montifringilla nivalis during the nestling rearing period. Journal of Ornithology 158, 277–286 (2017).
Brambilla, M. et al. Past and future impact of climate change on foraging habitat suitability in a high-alpine bird species: Management options to buffer against global warming effects. Biological conservation 221, 209–218, https://doi.org/10.1016/j.biocon.2018.03.008 (2018).
Gelman, A. & Greenland, S. Are confidence intervals better termed uncertainty intervals? BMJ (Clinical research ed.) 366, l5381, https://doi.org/10.1136/bmj.l5381 (2019).
Wasserstein, R. L., Schirm, A. L. & Lazar, N. A. Moving to a world beyond " p < 0.05". The American Statistician 73, 1–19, https://doi.org/10.1080/00031305.2019.1583913 (2019).
Gelman, A., Meng, X.-L. & Stern, H. Posterior predictive assessment of model fitness via realized discrepancies. Statistica Sinica 6, 733–807 (1996).
Chambert, T., Rotella, J. J. & Higgs, M. D. Use of posterior predictive checks as an inferential tool for investigating individual heterogeneity in animal population vital rates. Ecology and Evolution 4, 1389–1397 (2014).
Pradel, R., Hines, J. E., Lebreton, J. D. & Nichols, J. D. Estimating survival rate and proportion of transients using capture-recapture data from open population. Biometrics 53, 88–99 (1997).
Nichols, J. D., Hines, J. E. & Pollock, K. H. Effects of permanent trap response in capture probability on Jolly-Seber Capture-Recapture model estimates. The Journal of Wildlife Management 48, 289, https://doi.org/10.2307/3808491 (1984).
Heiniger, P. H. Anpassungsstrategien des Schneefinken (Montifringilla nivalis) an die extremen Umweltbedingungen des Hochgebirges. Ph.D. thesis, University of Bern, Bern (1988).
Lindner, R. Der Schneefink (Montifringilla nivalis) ein unbekanntes Charaktertier der Alpinzone des Nationalparks Hohe Tauern. Report, Haus der Natur, Museum für angewandte und darstellende Naturkunde, Salzburg (2002).
Markl, A. Zur Biologie des Schneefinken Montifringilla nivalis L. Sein ausgezeichnetes Flugvermögen und sein Verhalten als Strich-, Stand- und Zugvogel. Ornithologische und unterrichtswissenschaftliche Verarbeitung. Lehramtsprüfung für Hauptschulen, Hausarbeit, Pädagogische Akademie des Bundes, Salzburg (1995).
Holand, H. et al. Lower survival probability of house sparrows severely infected by the gapeworm parasite. Journal of Avian Biology 45, 365–373, https://doi.org/10.1111/jav.00354 (2014).
Blueweiss, L. et al. Relationships between body size and some life history parameters. Oecologia 37, 257–272, https://doi.org/10.1007/BF00344996 (1978).
Dillingham, P. W. & Fletcher, D. Estimating the ability of birds to sustain additional human-caused mortalities using a simple decision rule and allometric relationships. Biological Conservation 141, 1783–1792 (2008).
Fletcher, D. et al. Bias in estimation of adult survival and asymptotic population growth rate caused by undetected capture heterogeneity. Methods in Ecology and Evolution 3, 206–216 (2012).
Strinella, E., Cantoni, C., de Faveri, A. & Artese, C. Biometrics, sexing and moulting of snow finch Montifringilla nivalis in central Italy. Ringing & Migration 26, 1–8 (2011).
Greenwood, P. J. & Harvey, P. H. The natal and breeding dispersal of birds. Annual Review of Ecology and Systematics 13, 1–21 (1982).
Harts, A. M., Jaatinen, K. & Kokko, H. Evolution of natal and breeding dispersal: When is a territory an asset worth protecting? Behavioral Ecology 27, 287–294, https://doi.org/10.1093/beheco/arv148 (2016).
Forero, M. G., Donázar, J. A., Blas, J. & Hiraldo, F. Causes and consequences of territory change and breeding dispersal in the black kite. Ecology 80, 1298–1310, 10.1890/0012-9658(1999)080[1298:CACOTC]2.0.CO;2 (1999).
Schaub, M. & Hirschheydt, J. V. Effect of current reproduction on apparent survival, breeding dispersal, and future reproduction in Barn swallows assessed by multistate capture-recapture models. Journal of Animal Ecology 78, 625–635 (2009).
Végvári, Z. et al. Sex-biased breeding dispersal is predicted by social environment in birds. Ecology and Evolution 8, 6483–6491, https://doi.org/10.1002/ece3.4095 (2018).
Petriccione, B. & Bricca, A. Thirty years of ecological research at the Gran Sasso d'Italia LTER site: Climate change in action. Nature Conservation 34, 9–39, https://doi.org/10.3897/natureconservation.34.30218 (2019).
Evangelista, A. et al. Changes in composition, ecology and structure of high-mountain vegetation: a re-visitation study over 42 years. AoB PLANTS 8, https://doi.org/10.1093/aobpla/plw004 (2016).
Rogora, M. et al. Assessment of climate change effects on mountain ecosystems through a cross-site analysis in the Alps and Apennines. Science of the Total Environment 624, 1429–1442, https://doi.org/10.1016/j.scitotenv.2017.12.155 (2018).
Tingley, M. W., Koo, M. S., Moritz, C., Rush, A. C. & Beissinger, S. R. The push and pull of climate change causes heterogeneous shifts in avian elevational ranges. Global Change Biology 18, 3279–3290, https://doi.org/10.1111/j.1365-2486.2012.02784.x (2012).
Lehikoinen, A., Green, M., Husby, M., Kålås, J. A. & Lindström, Å. Common montane birds are declining in northern Europe. Journal of Avian Biology 45, 3–14 (2014).
Strinella, E., Catoni, C., de Faveri, A. & Artese, C. Ageing and sexing of the snow finch Montifringilla nivalis by the pattern of primary coverts. Avocetta 37, 9–14 (2013).
Aichhorn, A. Brutbiologie und Verhalten des Schneefinken in Tirol. Journal of Ornithology 107, 398 (1966).
Ringsby, T. H., Berge, T., Saether, B.-E. & Jensen, H. Reproductive success and individual variation in feeding frequency of house sparrows (Passer domesticus). Journal of Ornithology 150, 469–481, https://doi.org/10.1007/s10336-008-0365-z (2009).
Johnson, R. E. Temperature regulation in the white-tailed ptarmigan Lagopus leucurus. Master thesis, University of Montana, Montana (1968).
West, G. C. Seasonal differences in resting metabolic rate of Alaskan ptarmigan. Comparative Biochemistry and Physiology Part A 42, 867–876 (1972).
Visinoni, L., Pernollet, C. A., Desmet, J.-F., Korner-Nievergelt, F. & Jenni, L. Microclimate and microhabitat selection by the alpine rock ptarmigan (Lagopus muta helvetica) during summer. Journal of Ornithology 156, 407–417, https://doi.org/10.1007/s10336-014-1138-5 (2015).
Oswald, K. N., Smit, B., Lee, A. T. & Cunningham, S. J. Behaviour of an alpine range-restricted species is described by interactions between microsite use and temperature. Animal Behaviour 157, 177–187, https://doi.org/10.1016/j.anbehav.2019.09.006 (2019).
Heiniger, P. H. Zur Ökologie des Schneefinken (Montifringilla nivalis): Raumnutzung im Winter und Sommer mit besonderer Berücksichtigung der Winterschlafplätze. Revue Suisse Zoologie 98, 897–924 (1991).
Wehrle, C. M. Zur Winternahrung des Schneefinken Montifringilla nivalis. Der Ornithologische Beobachter 86, 53–68 (1989).
Scheepens, J. F. & Stöcklin, J. Flowering phenology and reproductive fitness along a mountain slope: maladaptive responses to transplantation to a warmer climate in Campanula thyrsoides. Oecologia 171, 679–691, https://doi.org/10.1007/s00442-012-2582-7 (2013).
Sutherland, W. J. From individual behaviour to population ecology (Oxford University Press, Oxford, 1996).
Carrascal, L. M., Senar, J. C., Mozetich, I., Uribe, F. & Domenech, J. Interactions among environmental stress, body condition, nutritional status, and dominance in great tits. The Auk 115, 727–738 (1998).
Chiffard, J., Delestrade, A., Yoccoz, N. G., Loison, A. & Besnard, A. Warm temperatures during cold season can negatively affect adult survival in an alpine bird. Ecology and Evolution 15, 123, https://doi.org/10.1002/ece3.5715 (2019).
Grangé, J.-L. Biologie de reproduction de la Niverolle alpine Montifringilla nivalis dans les Pyrénées occidentales Françaises. Nos Oiseaux 55, 67–82 (2008).
Strinella, E., Vianale, P., Pirrello, S. & Artese, C. Biologia riproduttiva del Fringuello alpino Montifringilla nivalis a Campo Imperatore nel Parco Nazionale del Gran Sasso e Monti della Laga (AQ). Alula 8, 95–100 (2011).
Stearns, S. C. The evolution of life histories (Oxford University Press, Oxford, 1992).
Ardia, D. R. Tree swallows trade off immune function and reproductive effort differently across their range. Ecology 86, 2040–2046, https://doi.org/10.1890/04-1619 (2005).
Lionello, P. et al. The climate of the Mediterranean region: Research progress and climate change impacts. Regional Environmental Change 14, 1679–1684, https://doi.org/10.1007/s10113-014-0666-0 (2014).
Beniston, M. Climate change in mountain regions: a review of possible impacts. Climatic Change 59, 5–31 (2003).
Maggini, R. et al. Are Swiss birds tracking climate change? Detecting elevational shifts using response curve shapes. Ecological Modelling 222, 21–32 (2011).
Wilson, S. & Martin, K. Influence of life history strategies on sensitivity, population growth and response to climate for sympatric alpine birds. BMC Ecology 12, 9 (2012).
Chamberlain, D. E., Pedrini, P., Brambilla, M., Rolando, A. & Girardello, M. Identifying key conservation threats to Alpine birds through expert knowledge. PeerJ 4, e1723, https://doi.org/10.7717/peerj.1723 (2016).
Cormack, R. M. Estimates of survival from the sighting of marked animals. Biometrika 51, 429–438 (1964).
Jolly, G. Explicit estimates from capture-recapture data with both death and immigration-stochastic model. Biometrika 52, 225–247 (1965).
Seber, G. A. F. A note on the multiple-recapture census. Biometrika 52, 249–259 (1965).
Lebreton, J.-D., Burnham, K. P., Clobert, J. & Anderson, D. R. Modelling survival and testing biological hypotheses using marked animals: a unified approach with case studies. Ecological Monographs 62, 67–118 (1992).
Nichols, J. D., Kendall, W. L., Hines, J. E. & Spendelow, J. A. Estimation of sex-specific survival from capture-recapture data when sex is not always known. Ecology 85, 3192–3201 (2004).
Pledger, S. Unified maximum likelihood estimates for closed capture-recapture models using mixtures. Biometrics 56, 434–442 (2000).
Mayfield, H. F. Suggestions for calculating nest success. Wilson Bulletin 87, 456–466 (1975).
Mallory, E. P., Brokaw, N. & Hess, S. Coping with mist-net capture-rate bias: canopy height and several extrinsic factors. Studies in Avian Biology 29, 151–160 (2004).
Amrhein, V. et al. Estimating adult sex ratios from bird mist netting data. Methods in Ecology and Evolution 3, 713–720 (2012).
Carpenter, B. et al. Stan: A probabilistic programming language. Journal of Statistical Software 76, https://doi.org/10.18637/jss.v076.i01 (2017).
Stan Development Team. RStan: the R interface to Stan (2018). R package version 2.17.3.
Stan Development Team. shinystan: Interactive visual and numerical diagnostics and posterior analysis for Bayesian models (2017).
R Core Team. R: A language and environment for statistical computing (2019).
We thank Parco Nazionale del Gran Sasso e Monti della Laga for hosting and contributing to the realisation of this study. We thank the staff of the Reparto Carabinieri Biodiversità L'Aquila and the Giardino Botanico Alpino Campo Imperatore Università degli Studi di L'Aquila. We are grateful to Giuseppe Bogliani, Paolo Pedrini and Ufficio Idrografico e Mareografico Regione Abruzzo for requesting and providing the weather data. We very much appreciate the help of Michael Betancourt with coding the mixture model in Stan. We thank Brett K. Sandercock and J. Nichols for helpful comments on the analyses, and Valentin Amrhein, Catriona Morrison, Jules Chiffard and an anonymous referee for improving the manuscript.
Reparto Carabinieri Biodiversità L'Aquila, L'Aquila, Italy
Eliseo Strinella
Museo delle Scienze di Trento (MUSE), Sezione Zoologia dei Vertebrati, Corso del Lavoro e della Scienza 3, 38122, Trento, Italy
Davide Scridel & Mattia Brambilla
Ente Parco Naturale Paneveggio Pale di San Martino, loc. Castelpietra, 2-Tonadico, Trento, Italy
Davide Scridel
Fondazione Lombardia per l'Ambiente, Largo 10 luglio 1976 1, I-20822, Seveso, MB, Italy
Mattia Brambilla
Swiss Ornithological Institute, Seerose 1, CH, 6204, Sempach, Switzerland
Christian Schano & Fränzi Korner-Nievergelt
University of Zurich, Department of Evolutionary Biology and Environmental Studies, Winterthurerstrasse 190, CH, 8057, Zurich, Switzerland
Christian Schano
Fränzi Korner-Nievergelt
E.S. collected the bird data, D.S. prepared the climatic data, F.K. analyzed the data and led the writing process. All authors helped with planning the analyses, discussing the interpretations and conclusions and writing the manuscript.
Correspondence to Fränzi Korner-Nievergelt.
Strinella, E., Scridel, D., Brambilla, M. et al. Potential sex-dependent effects of weather on apparent survival of a high-elevation specialist. Sci Rep 10, 8386 (2020). https://doi.org/10.1038/s41598-020-65017-w
Genomic prediction with epistasis models: on the marker-coding-dependent performance of the extended GBLUP and properties of the categorical epistasis model (CE)
Johannes W. R. Martini, Ning Gao, Diercles F. Cardoso, Valentin Wimmer, Malena Erbe, Rodolfo J. C. Cantet & Henner Simianer
BMC Bioinformatics volume 18, Article number: 3 (2017)
Epistasis marker effect models incorporating products of marker values as predictor variables in a linear regression approach (extended GBLUP, EGBLUP) have been assessed as potentially beneficial for genomic prediction, but their performance depends on marker coding. Although this fact has been recognized in literature, the nature of the problem has not been thoroughly investigated so far.
We illustrate how the choice of marker coding implicitly specifies the model of how effects of certain allele combinations at different loci contribute to the phenotype, and investigate coding-dependent properties of EGBLUP. Moreover, we discuss an alternative categorical epistasis model (CE) eliminating undesired properties of EGBLUP and show that the CE model can improve predictive ability. Finally, we demonstrate that the coding-dependent performance of EGBLUP offers the possibility to incorporate prior experimental information into the prediction method by adapting the coding to already available phenotypic records on other traits.
Based on our results, for EGBLUP, a symmetric coding {−1,1} or {−1,0,1} should be preferred, whereas a standardization using allele frequencies should be avoided. Moreover, CE can be a valuable alternative since it does not possess the undesired theoretical properties of EGBLUP. However, which model performs best will depend on characteristics of the data and available prior information. Data from previous experiments can for instance be incorporated into the marker coding of EGBLUP.
Genomic prediction aims at forecasting qualitative or quantitative properties of individuals based on known genetic information. The genetic information can for instance be given by single-nucleotide-polymorphisms (SNPs) or other kinds of genetic data of individual animals, plant lines or humans. Applied to animals and plants, genomic prediction is of central importance for breeding within the concept of genomic selection [1, 2]. Moreover, genomic prediction can also be used in medicine or epidemiology for risk assessment or prevalence studies of (partially) genetically determined diseases (e.g. [3]). One of the standard approaches for genomic prediction of quantitative traits is based on a linear regression model in which the phenotype is described by a linear function of the genotypic markers. In more detail, the standard additive linear model is defined by the equation
$$ \mathbf{y} = \mathbf{1}\mu + \mathbf{M} \boldsymbol{\beta} + \boldsymbol{\epsilon} $$
where y is the n×1 vector of phenotypes of the n individuals, 1 the n×1 vector with each entry equal to 1, μ the fixed effect and M the n×p matrix giving the p marker values of the n individuals. Moreover, β is the p×1 vector of unknown marker effects and ε a random n×1 error vector with \(\epsilon _{i} {\overset {i.i.d.}{\sim }}\mathcal {N}(0,\sigma _{\epsilon }^{2})\). Since the number of markers p is typically much larger than the number of individuals n, the additional assumption that \(\beta _{j} \overset {i.i.d.}{\sim } \mathcal {N}(0,\sigma _{\beta }^{2})\) is usually made (and all random terms together are considered as stochastically independent). In particular, using an approach of maximizing the density of a certain distribution [4], this assumption allows us to determine the penalizing weight in a Ridge Regression approach, which is known as ridge regression best linear unbiased prediction (RRBLUP) and which is fully equivalent to its relationship matrix-based counterpart genomic best linear unbiased prediction (GBLUP) [5, 6]. The answer to the question of which type of marker coding is appropriate in M depends on the combination of the type of genotypic marker and the ploidy of the organism dealt with. For instance, if haploid organisms are considered or presence/absence markers are used, a possible coding for the j-th marker value of the i-th individual \(M_{i,j}\) is the set {0,1}. Counting the occurrence of an allele of a diploid organism, the sets {0,1,2} or {−1,0,1}, or rescaled variants can be used. If the marker effects β and the fixed effect μ are predicted/estimated as \(\boldsymbol {\hat {\beta }}\) and \(\hat {\mu }\) on the basis of a training set, the expected phenotypes of individuals from a test set, which were not used to determine \(\boldsymbol {\hat {\beta }}\) and \(\hat {\mu }\), can be predicted by using their marker information in Eq. (1) with \(\hat {\mu },\boldsymbol {\hat {\beta }}\).
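The RRBLUP fit described above can be sketched numerically. The following Python snippet is an illustrative sketch only (not the authors' implementation; the sizes, random seed, and effect variances are invented for the example): it solves the ridge system for \(\mu\) and \(\boldsymbol{\beta}\) jointly, with the penalty \(\lambda=\sigma_{\epsilon}^{2}/\sigma_{\beta}^{2}\) assumed known and the fixed effect left unpenalized.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 200                       # fewer individuals than markers, as is typical
M = rng.integers(0, 3, size=(n, p)).astype(float)   # {0,1,2} marker coding
beta_true = rng.normal(0.0, 0.3, size=p)
y = 4.0 + M @ beta_true + rng.normal(0.0, 0.5, size=n)

lam = 0.5**2 / 0.3**2                # lambda = sigma_eps^2 / sigma_beta^2, assumed known
X = np.hstack([np.ones((n, 1)), M])  # fixed-effect column followed by the markers
D = np.diag([0.0] + [lam] * p)       # penalize marker effects, not the intercept
sol = np.linalg.solve(X.T @ X + D, X.T @ y)
mu_hat, beta_hat = sol[0], sol[1:]

# predicted expected phenotype of any genotype row m: mu_hat + m @ beta_hat
```

With p much larger than n the individual effects are strongly shrunken, but the predicted genetic values remain useful for ranking individuals.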
We will call the difference between the predicted expected phenotype and the estimated fixed effect the predicted genetic value. For the purely additive model of Eq. (1) and a diploid organism with possible genotypes aa, aA and AA for locus j, the choice of how to translate these possibilities into numbers was reported not to affect the predictive ability notably, as long as the difference between the coding of aa and aA is the same as between aA and AA and equal for all markers [5, 7–9]. However, an extension of the additive model, which we call the extended GBLUP model (EGBLUP) [10, 11]
$$ y_{i} = \mu + \sum\limits_{j=1}^{p} M_{i,j} \beta_{j} + \sum\limits_{k=1}^{p}\sum\limits_{j=k}^{p} M_{i,j}M_{i,k} h_{j,k} + \epsilon_{i}, $$
has been shown to exhibit strong coding-dependent performance [12, 13]. Here, \(h_{j,k}\overset {i.i.d.}{\sim } \mathcal {N}\left (0,{\sigma ^{2}_{h}}\right)\) is the pairwise interaction effect of markers j and k, and all other variables are as previously defined (all terms stochastically independent). Compared to Eq. (1), this model additionally incorporates pairwise products of marker values as predictor variables and thus allows us to model interactions between markers. Moreover, the interaction of a marker with itself provides a way to model dominance effects (see e.g. [11, 14–16]). The epistasis model of Eq. (2) and some variations with restrictions on which markers can interact have been the main object of investigation in several publications, and models incorporating epistasis have been viewed as potentially beneficial for the prediction of complex traits [10, 11, 17–19], but a marker-coding-dependent performance was observed [12, 13].
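The interaction covariates of Eq. (2) are simply pairwise products of marker columns, including the case j = k that models dominance. A minimal sketch (the two genotype rows are hypothetical):

```python
import numpy as np
from itertools import combinations_with_replacement

M = np.array([[0., 1., 2.],      # two individuals, three markers, {0,1,2} coding
              [2., 0., 1.]])
pairs = list(combinations_with_replacement(range(M.shape[1]), 2))  # all j <= k
E = np.column_stack([M[:, j] * M[:, k] for j, k in pairs])
# column order: (0,0) (0,1) (0,2) (1,1) (1,2) (2,2); j == k models dominance
```

The augmented design matrix for EGBLUP is then the column-wise concatenation of M and E, with the ridge penalty applied separately to the additive and interaction blocks.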
In this work, we investigate how the marker coding specifies the effect model for markers with two or three possible values and show how we can find the marker coding for an a priori specified model. We discuss advantages and disadvantages of different coding methods and investigate properties of alternative linear models based on categorical instead of numerical dosage variables. In particular, we show how to represent these models as genomic relationship matrices. Finally, we compare the predictive abilities of different epistasis models on simulated and publicly available data sets and demonstrate a way of using the coding-dependent performance of EGBLUP to incorporate prior information.
Data sets used for assessing predictive ability
Simulated data
A population with 10 000 bi-allelic markers spread across five chromosomes was simulated using the QMSim software [20]. The size of the first chromosome was 140 centimorgan (cM) with 3 500 markers. Chromosomes 2 to 5 had a size of 110 cM (2 750 markers), 80 cM (2 000 markers), 50 cM (1 250 markers) and 20 cM (500 markers), respectively. In order to allow mutations and linkage disequilibrium establishment, a historical population was simulated with 5 000 individuals (2 500 males and 2 500 females) with random mating for 1 000 generations with constant population size and with a replacement rate of 0.2 for males and females. Then the population size was reduced to 1 000 individuals for 20 additional generations (generation 1 001 to 1 020). The simulated mutation rate was \(2.5\cdot 10^{-5}\).
We used these simulated genotypes as a basis and modeled three different types of genetic architecture (purely additive, purely dominant and purely epistatic), each with a varying number of quantitative trait loci (QTL) on top. We chose these types of genetic architecture, without additive effects in the dominance and epistasis scenarios, to make the three scenarios as different as possible. To model the phenotype, out of the 10 000 markers, 200 were drawn randomly from each of the five chromosomes to define in total 1 000 QTL for additive or dominance effects. For the purely additive scenario, the 1 000 additive effects were drawn independently from a \(\mathcal {N}(0,1)\) distribution. For the first additive trait A1, 10 out of the 1 000 QTL were drawn and the genetic values of all individuals were calculated according to the effects of these 10 loci. To define a broad-sense heritability of 0.8, the genetic values were standardized to mean 0 and variance 1 and individual errors were drawn from a \(\mathcal {N}(0,0.25)\) distribution. Having added these individual errors to the genetic values, these phenotypes were again standardized to mean 0 and variance 1. For the second trait A2, an additional 90 QTL were drawn from the initial 1 000 to give in total 100 QTL for this trait, including the QTL of trait A1 with their corresponding effects. Analogously, for A3, all initially drawn 1 000 QTL were used. The standardization procedure was identical to the one previously described for A1. For the comparison of genomic prediction with different relationship models, these 1 000 markers were removed. The relationship matrices were based on the remaining 9 000 markers.
For the dominance scenario D1 (10 QTL), D2 (100 QTL) and D3 (1 000 QTL), we used the same QTL positions as for A1, A2, and A3, respectively, but simulated \(\mathcal {N}(0,1)\)-distributed dominance effects. The standardization procedure to a broad sense heritability of 0.8 was carried out as described before.
For the epistasis traits E1, E2 and E3, 1 000, 10 000 or 100 000 pairs of markers were drawn randomly, and for each draw, one of the nine possible configurations of the pair was randomly chosen to have an \(\mathcal {N}(0,1)\)-distributed effect. For instance, having drawn the marker pair \((j,k)\), only the configuration \((M_{i,j},M_{i,k})=(0,2)\) was chosen to have an effect, which again was drawn randomly. This was done independently for each trait, which means trait E2 does not necessarily share causal combinations of markers with trait E1. The phenotypes were standardized as described above. Note that the markers involved in causal combinations were not removed here, since in expectation, every marker is somehow involved in the phenotype of traits E2 and E3.
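The construction of an epistatic trait along these lines can be sketched as follows. This is a toy re-implementation, not the QMSim-based pipeline above: the population size, marker number, seed, and number of drawn pairs are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 1000
M = rng.integers(0, 3, size=(n, p))           # {0,1,2} genotypes

g = np.zeros(n)
for _ in range(500):                          # randomly drawn marker pairs
    j, k = rng.choice(p, size=2, replace=False)
    cfg = rng.integers(0, 3, size=2)          # one of the nine pair configurations
    effect = rng.normal()                     # N(0,1) effect for that configuration
    g += effect * ((M[:, j] == cfg[0]) & (M[:, k] == cfg[1]))

g = (g - g.mean()) / g.std()                  # standardize genetic values
y = g + rng.normal(0.0, 0.5, size=n)          # error variance 0.25 -> H^2 = 0.8
y = (y - y.mean()) / y.std()                  # standardize phenotypes again
```

Only the drawn configuration of each pair carries an effect; all other genotype combinations at that pair contribute nothing, exactly as in the description above.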
We repeated this whole procedure, including the simulation of the genotypes, 20 times and compared the different models by their average predictive ability across the 20 repetitions. The simulated data can be found in Additional file 1 of this publication.
Wheat data
The wheat data which we used to compare different methods was published by Crossa et al. [21]. The 1279 DArT markers of 599 CIMMYT inbred wheat lines indicate whether a certain allele is present (1) or not (0). The phenotypic data describes standardized records of grain yield under four environmental conditions.
Mouse data
The mouse data set we used was published and described by Solberg et al. [22] and Valdar et al. [23], and was downloaded from the corresponding website of the Wellcome Trust Centre for Human Genetics. The physical map of single nucleotide polymorphisms (SNPs) was updated to the latest version of the mouse genome (Mus musculus, assembly GRCm38.p4) with the biomaRt R package [24, 25]. Only SNPs mapped to GRCm38.p4 were used for further analysis. For the remaining markers, the proportion of missing marker values was low (0.33%) and we performed a random imputation. The nucleotide-coded genotypes were translated to a {0,1,2} coding, where 0 and 2 denote the two homozygous genotypes and 1 the heterozygous genotype. SNPs with minor allele frequency (MAF) smaller than 0.01 were excluded from the dataset. Imputation, recoding, and quality control of genotypes were carried out simultaneously with the synbreed R package [26]. A total of 9 265 SNPs remained in the dataset. We only used individuals with available records for all considered traits, which reduced the number of individuals to 1 298. We focused on the provided pre-corrected residuals of 13 traits, from which fixed effects of trait-specific relevant covariates such as sex, season and month have already been subtracted. A detailed description of the traits can be found on the corresponding sites of the UCL. Moreover, the data resulting from quality control and filtering as well as the corrected phenotypes of the traits we used can be found in Additional file 1.
Genomic relationship based prediction and assessment of predictive ability
We used an approach based on relationship matrices for genomic prediction. The underlying concept of this approach is the equivalence of marker effect-based and genomic relationship-based prediction ([5, 10, 11]). Given the respective relationship matrix, the prediction is performed by Eq. (3) (for a derivation of this equation see the supporting information of [11]):
$$ \begin{aligned} {}\left(\begin{array}{c} \hat{\mathbf{g}}_{train}\\ \hat{\mathbf{g}}_{test} \end{array}\right) & =\left[\mathbf{T}_{train} - s^{-1} \left(\begin{array}{cc} \mathbf{J}_{s \times s} & 0 \\ 0& 0 \end{array}\right)\right. \\ & \quad \left. + \sigma_{\epsilon}^{2} \left(\frac{1}{\sigma^{2}_{\beta}} \mathbf{G}^{-1}\right) \right]^{-1} \!\left(\! \left(\begin{array}{c} \mathbf{y}_{train} \\ 0 \end{array}\right) \!- \left(\begin{array}{c} \mathbf{1}_{s} \bar{y}_{train}\\ 0 \end{array} \right)\! \right) \end{aligned} $$
The matrix G is the central object denoting the genomic relationship matrix of the respective model. The variables \(\hat {\mathbf {g}}_{i}\) are the predicted genetic values (expected phenotype minus the fixed effect \(\hat {\mu }\)) of the respective set (training or test set). Moreover, s is the number of genotypes in the training set, \(\mathbf{1}_{s}\) is the vector of length s with each entry equal to 1, \(\mathbf{J}_{s\times s}\) is the analogous \(s\times s\) matrix with each entry equal to 1 and \(\bar {y}_{train}\) is the empirical mean of the training set. Here, \(\mathbf{T}_{train}\) denotes the diagonal matrix of dimension n with 0 on the diagonal at the positions of the test set genotypes, and 1 for the training set individuals.
To assess the predictive ability of different models, we chose a test set consisting of ∼ 10% of the total number of individuals (100, 60, or 130 for the simulated, the wheat and the mouse data, respectively). We then used the remaining individuals as a training set and predicted the genetic values for all individuals using Eq. (3). The variance components \(\sigma _{\epsilon }^{2}\) and \(\sigma _{\beta }^{2}\) were estimated from the training set using version 3.1 of the R package EMMREML [27]. The relationship matrix relating the genotypes of the training set was used to estimate the variance components based on the phenotypes of the training set only. The variance components were then used with the complete relationship matrix for the prediction of the genetic values of all individuals in Eq. (3). This procedure was repeated 200 times, with independently drawn test sets. The average correlation r between observed and predicted mean phenotypes of the test set was used as a measure of predictive ability. A description of how the different effect models can be translated into relationship matrices is given in the Results. For the Gaussian kernel, we used the bandwidth parameter \(b=2q_{0.5}^{-1}\), with \(q_{0.5}\) the median of all squared Euclidean distances between the individuals of the respective data. For the simulated data, which consisted of 20 independent data sets, we present the average predictive ability and the average standard error of the mean. For the wheat and the mouse data, we used Tukey's 'Honest Significant Difference' test to contrast the performance of the different prediction methods (TukeyHSD() and lm() of R [28]).
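One round of relationship-based prediction and the correlation measure of predictive ability can be sketched as follows. This is a simplified sketch, not Eq. (3) itself: it solves the prediction problem on the training block only (a common equivalent formulation for a single random effect), and the sizes, seed, and variance ratio are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 120, 100
M = rng.integers(0, 3, size=(n, p)).astype(float)
g_true = M @ rng.normal(0.0, 0.2, size=p)
y = g_true + rng.normal(0.0, 0.5 * g_true.std(), size=n)

Mc = M - M.mean(axis=0)
G = Mc @ Mc.T / p                        # a simple additive relationship matrix
lam = 0.25                               # assumed ratio sigma_eps^2 / sigma_g^2

test = rng.choice(n, size=30, replace=False)        # held-out test set
train = np.setdiff1d(np.arange(n), test)
a = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(train.size),
                    y[train] - y[train].mean())
g_hat = G[np.ix_(test, train)] @ a       # predicted genetic values of the test set

r = np.corrcoef(g_hat, y[test])[0, 1]    # predictive ability for this draw
```

Repeating the draw of the test set and averaging r mirrors the 200-fold repetition described above.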
Incorporation of prior information by marker coding
As described above, the data we used offers records of different traits or trait × environment combinations of the same individuals. We will illustrate that the coding-dependent performance of EGBLUP can also be used to incorporate a priori information into the model by choosing the coding for each interaction with already provided data and by using the corresponding relationship matrix for prediction under altered environmental conditions or for a correlated trait. For the wheat data, we used the following procedure:
1. We predicted all the interactions \(\hat {h}_{k,l}\) for a given trait in a given environment, under the use of the {0,1} coding originally provided by Crossa et al. [21] (as described by Martini et al. [11]).
2. We changed the "orientation" of all markers at once by substituting 0 by 1, and 1 by 0, and predicted all interactions \(\tilde {h}_{k,l}\) under the use of the altered coding.
3. If the ratio of \( \left |\frac {\hat {h}_{k,l}}{\tilde {h}_{k,l}} \right |\) was greater than or equal to 1, we assumed that the original orientation provided by the data set describes the respective interaction better than the alternative coding.
4. We then calculated a relationship matrix for each interaction individually by
$$\mathbf{G}_{k,l} = \mathbf{\left(M_{\bullet, k} M_{\bullet, k}^{\prime} \right) \circ \left(M_{\bullet, l} M_{\bullet, l}^{\prime} \right)} $$
with \(\mathbf{M}_{\bullet, k}\) denoting the \(n\times 1\) vector of marker data of locus k for all individuals, in the respective coding which seems to fit the interaction better according to step 3) (see [11, 29]). Here, ∘ denotes the Hadamard product.
5. The overall relationship matrix was then defined by \(\mathbf {G}= \sum \limits _{k=1}^{p} \sum \limits _{l \geq k}^{p}\mathbf {G}_{k,l}\).
We used the data of each environment to calculate an optimally coded relationship matrix for this environment, which was used afterwards for predicting phenotypes in the other environments. The underlying heuristic of step 3) is that a small effect means that the interaction is less important in the respective coding. If the underlying effect model defined by the coding does not capture the data structure, the estimated effect should be close to zero. However, if the effect of a combination is important to describe the phenotype distribution, a larger effect should be assigned (see also Example 1, where the estimated effect is 0, if the underlying parameterization cannot describe the present effect distribution).
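The per-interaction relationship matrices \(\mathbf{G}_{k,l}\) and their sum can be sketched directly. The marker matrix and the per-marker orientation choices below are hypothetical; the closed form used in the check follows from expanding the sum over pairs, \(\sum_{k\leq l} a_k a_l = \tfrac{1}{2}\left[(\sum_k a_k)^2 + \sum_k a_k^2\right]\).

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 5
M = rng.integers(0, 2, size=(n, p)).astype(float)   # {0,1} presence/absence markers
flip = np.array([True, False, True, False, False])  # hypothetical orientation per marker

Mc = M.copy()
Mc[:, flip] = 1.0 - Mc[:, flip]     # swap the roles of 0 and 1 where that coding fit better

G = np.zeros((n, n))
for k in range(p):
    for l in range(k, p):           # all pairs with l >= k, including l == k
        G += np.outer(Mc[:, k], Mc[:, k]) * np.outer(Mc[:, l], Mc[:, l])
```

The double loop is written for clarity; for real marker counts the equivalent closed form \(\tfrac{1}{2}\left[(\mathbf{M}\mathbf{M}')^{\circ 2} + \mathbf{M}^{\circ 2}(\mathbf{M}^{\circ 2})'\right]\) (with ∘ elementwise) is far cheaper.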
For the mouse data, we used the 13 considered traits to construct a relationship matrix for each of them. Each relationship matrix was afterwards used for prediction within the data of the twelve other traits. The two different codings which were compared here, were the {0,1,2} coding based on the imputed originally provided data and its inverted version with 0 and 2 permuted.
In the following, we will highlight aspects of the behavior of the additive effect model of Eq. (1) when the marker coding is altered. These properties of the additive model will afterwards be compared to those of the epistasis model of Eq. (2).
All relationship matrices will be assumed to be positive definite and thus invertible. Mathematical derivations of the illustrated properties can be found in Additional file 2.
Properties of GBLUP
We start with the effect of translations of the coding, that is, the addition of a number \(p_{j}\) to the initially chosen marker coding of marker j.
Property 1 (Translation-invariance of GBLUP). Let P denote a vector whose entries give the arbitrary translations \(p_{j}\) of the coding of locus j. Moreover, let the ratio of \(\sigma _{\epsilon }^{2}\) and \(\sigma _{\beta }^{2}\) be known and unchanged if the marker coding is translated. Let \(\boldsymbol {\hat {\beta }}\) and \(\hat {\mu }\) denote the predicted / estimated quantities if the initial coding M is used in the Mixed Model Equation approach of Eq. (1) and let \(\boldsymbol {\tilde {\beta }}\) and \(\tilde {\mu }\) denote the corresponding quantities if the translation \(\tilde {\mathbf {M}}:=\mathbf {M}-\mathbf {1}\mathbf {P'}\) is used instead of M. Then the following statements hold:
\(\tilde {\mu }=\hat {\mu } + \mathbf {P'} \boldsymbol {\hat {\beta }}\)
\(\boldsymbol {\tilde {\beta }}=\boldsymbol {\hat {\beta }}\)
The prediction of the expected phenotype of each genotype is independent of whether M or \(\tilde {\mathbf {M}}\) is used.
The statement of Property 1 has already been discussed in literature [5, 7–9], and we will present a mathematical derivation based on the Mixed Model Equations in Additional file 2. The proof will be a blueprint for the derivation of other properties based on the Mixed Model Equations which can also be found in Additional file 2. Descriptively, we can see the presented invariance with respect to translations the following way: If we change the coding to \(\tilde {\mathbf {M}}:=\mathbf {M}-\mathbf {1P'}\), then \(\tilde {\mathbf {M}}\), \(\tilde {\mu }:=\hat {\mu } + \mathbf {P' \boldsymbol {\hat {\beta }}}\) and \(\boldsymbol {\tilde {\beta }}:=\boldsymbol {\hat {\beta }}\) will fit the phenotypes the same way as M, \(\hat {\mu }\) and \(\boldsymbol {\hat {\beta }}\) do. Thus, the prediction of the marker effects and consequently the prediction of the expected phenotypes of individuals will not be affected by the change of coding as long as the method of evaluating the "goodness of fit", that is the penalizing weight in a Ridge Regression approach remains unchanged. For this reason, it is important to note here that we made the precondition that the ratio of the variance components, which defines the penalty for effect size, will not be changed. This guarantees that the method of how to quantify the "goodness of fit" remains the same. In practice this may not exactly be the case if the vector P has non-identical entries, that is if the translation of the coding is not equal for all loci, since the variance components are usually estimated from the same data and the translation may have an effect on this estimation. However, this effect has been assessed as being negligible in practice [9]. 
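Property 1 can also be verified numerically. The following sketch (sizes, seed, translation vector, and penalty are invented; the ratio of the variance components is held fixed, as the property requires) fits the ridge system with the original and with a translated coding:

```python
import numpy as np

def fit(M, y, lam):
    """Ridge solution for (mu, beta) with the intercept left unpenalized."""
    n, p = M.shape
    X = np.hstack([np.ones((n, 1)), M])
    D = np.diag([0.0] + [lam] * p)
    sol = np.linalg.solve(X.T @ X + D, X.T @ y)
    return sol[0], sol[1:]

rng = np.random.default_rng(0)
n, p = 30, 80
M = rng.integers(0, 3, size=(n, p)).astype(float)
y = M @ rng.normal(0.0, 0.3, size=p) + rng.normal(0.0, 0.5, size=n)
P = rng.normal(size=p)               # arbitrary translation of each marker's coding
lam = 2.0                            # sigma_eps^2 / sigma_beta^2, held fixed

mu1, b1 = fit(M, y, lam)
mu2, b2 = fit(M - P, y, lam)         # broadcasting subtracts p_j from column j
```

As the property states, the marker effects agree, the fixed effect shifts by \(\mathbf{P'}\boldsymbol{\hat{\beta}}\), and the predicted expected phenotypes coincide.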
To assess this problem from a theoretical point of view, without preconditions on the changes of \({\sigma ^{2}_{i}}\), the method for determining the variance components has to be taken into account to see whether a change in the marker coding has an influence on the ratio of the determined variance components. The next property considers the effect of rescaling the given marker coding.
Property 2. Let \(\boldsymbol {\hat {\beta }}\), \(\hat {\mu }\), \(\boldsymbol {\tilde {\beta }}\) and \(\tilde {\mu }\) denote the quantities as defined in Property 1 with \(\tilde {\mathbf {M}}:=c \mathbf {M}\) for a \(c\neq 0\). Moreover, let \(\sigma _{\epsilon }^{2}\) and \(\sigma _{\beta }^{2}\) for M be known and let the variance components used for the Ridge Regression approach based on \(\tilde {\mathbf {M}}\) fulfill \(\frac {\tilde {\sigma }_{\epsilon }^{2}}{\tilde {\sigma }_{\beta }^{2}}=c^{2}\frac {\sigma ^{2}_{\epsilon }}{\sigma _{\beta }^{2}}\). Then the following statements hold:
\(\tilde {\mu }=\hat {\mu } \)
\(\boldsymbol {\tilde {\beta }}=c^{-1}\boldsymbol {\hat {\beta }}\)
An important aspect of Property 2 is the precondition that the ratio of the variance components is adapted. In practice, when \(\sigma _{\beta }^{2}\) is estimated, we can assume that this circumstance will approximately be given, however, we have to highlight again that this also depends on the method of how the variance components are determined.
Epistasis models of the shape of Eq. (2)
The full EGBLUP model of Eq. (2) adds interaction terms of shape \(h_{j,k} M_{i,j} M_{i,k}\) to the additive model of Eq. (1). We will focus on the properties of these additional terms in the following. Evidently, the product structure of the additional covariates generates a dependence of the underlying effect model on the marker coding. In particular, the genotype coded as zero has a special role. If \(M_{i,j}\) equals zero, the whole term \(h_{j,k} M_{i,j} M_{i,k}\) will be equal to zero, independently of the values of \(h_{j,k}\) and \(M_{i,k}\). Thus, the model has the implicit assumption that certain combinations of alleles do not interact. The marker coding decides a priori which interactions are different from zero and which combinations are clustered. For instance, for the coding {−1,0,1} for the genotypes {aa, aA, AA} of a diploid organism, any interaction with a heterozygous locus will be zero, whereas the interactions with the homozygous locus aa will be zero if the coding {0,1,2} is used. Table 1 illustrates the differences of the two standard codings ({−1,0,1} vs. {0,1,2}). Here we see that the marker coding {0,1,2} implies that the effect is monotonically increasing (or decreasing if \(h_{j,k}\) is negative) with the distance from the origin, whereas the coding {−1,0,1} gives a different topology by only giving weight to the double homozygous genotypes. It is not obvious which coding is to be preferred and which reasonable assumptions on the effect of pairs can be made. In the following, we will discuss theoretical properties of the model induced by the marker coding.
Table 1 Comparison of the interaction effects which are given implicitly by the marker coding {−1,0,1} (left) and {0,1,2} (right) in the interaction terms of EGBLUP. Each entry has to be multiplied with the interaction effect \(h_{j,k}\)
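The two effect topologies of Table 1 are simply the outer products of the coding vectors, which can be checked in a few lines (an illustrative sketch; genotype order (aa, aA, AA) at locus j and (bb, bB, BB) at locus k):

```python
import numpy as np

sym = np.array([-1., 0., 1.])        # symmetric coding of (aa, aA, AA)
cnt = np.array([0., 1., 2.])         # allele-counting coding

W_sym = np.outer(sym, sym)           # weight multiplying h_{j,k} per genotype pair
W_cnt = np.outer(cnt, cnt)
print(W_sym)   # only the four double-homozygous corners are non-zero
print(W_cnt)   # weights grow with the distance from the (aa, bb) corner
```

Any row or column involving the genotype coded as zero is annihilated, which is exactly the clustering of combinations discussed above.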
As a first important observation, we note that the codings {−1,0,1} and {0,1,2} are translations of each other. Their very different interaction effect topologies illustrate that the epistasis model is not invariant with respect to translations. The fact that translations modify the model also makes obvious that by subtracting the matrix \(\mathbf{1P'}\), with P containing the allele frequencies of the respective markers, which is the standard normalization in the additive model [6], we will change the coding for the markers according to their frequencies and thus implicitly use different effect models for each pair of loci. We do not see a theoretical basis for this discrimination in an infinitesimal model without additional prior knowledge and therefore will consider mainly models which treat markers equally. Moreover, as gene frequencies are sometimes poorly estimated and very influential, avoiding their use seems to be appealing.
As illustrated, the epistasis model is not invariant with respect to translations, but we show now that the previously described invariance with respect to rescaling persists also for the epistasis model.
Property 3. Let \(\boldsymbol {\hat {\beta }}\), \(\hat {\mu }\), \(\boldsymbol {\tilde {\beta }}\) and \(\tilde {\mu }\) denote the quantities as defined in Property 1 with \(\tilde {\mathbf {M}}:=c \mathbf {M}\) for a \(c\neq 0\). Moreover, let \(\boldsymbol {\hat {h}}\) and \(\boldsymbol {\tilde {h}}\) denote the corresponding predictions for the interaction effects. Let \(\sigma _{\epsilon }^{2}\), \(\sigma _{\beta }^{2}\), \({\sigma _{h}^{2}}\) for M be known and let the variance components used for the Ridge Regression approach based on \(\tilde {\mathbf {M}}\) fulfill \(\frac {\tilde {\sigma }_{\epsilon }^{2}}{\tilde {\sigma }_{\beta }^{2}}=c^{2}\frac {\sigma ^{2}_{\epsilon }}{\sigma _{\beta }^{2}}\) and \(\frac {\tilde {\sigma }_{\epsilon }^{2}}{\tilde {\sigma }_{h}^{2}}=c^{4}\frac {\sigma ^{2}_{\epsilon }}{{\sigma _{h}^{2}}}\). Then the following statements hold:
\(\boldsymbol {\tilde {h}}=c^{-2}\boldsymbol {\hat {h}} \)
A formal derivation of this property based on the Mixed Model Equations can be found in the Additional file 2, but the statements are also plausible if we follow the descriptive argumentation for the invariance of the additive model: If \(\hat {\mu }\), \(\boldsymbol {\hat {\beta }}\) and \(\boldsymbol {\hat {h}}\) fit the phenotypic data best when marker matrix M is used, \(c^{-1}\boldsymbol {\hat {\beta }}\) and \(c^{-2}\boldsymbol {\hat {h}}\) will fit the phenotypic data the same way if M is substituted by \(\tilde {\mathbf {M}}\) in Eq. (2) (for any constant c≠0). The important precondition is that the penalizing weight, which defines which fit is "best", is adapted. A question that might come up in the context of Properties 2 and 3 is whether we could also multiply each coding for locus j with its own constant c j ≠0, similar to what we had for Property 1 and vector P. A problem that will appear here is that the variance of the marker effects will not be changed uniformly and thus, we cannot simply adapt the variance components to cancel the impact of rescaling. An individual rescaling and thus weighting of each marker [30], as well as a completely individual coding of each genotype of each locus, without the side conditions that the differences in the coding of the heterozygous and the two homozygous genotypes are identical across all loci or at least symmetric for each locus [12, 13], indeed has an impact on the predictive ability of the models, in particular also on that of GBLUP. However, the variance components \({\sigma _{i}^{2}}\) can be globally adapted to cancel the impact of a non-uniform rescaling of the marker coding, in case that some columns of M are multiplied with c and the others with −c (due to the assumption of all effects being symmetrically distributed around mean zero). An adapted sign of the effects also allows the predicted effect model to remain unchanged.
Permuting the role of the alleles at locus j. Let locus j have the possible allele configurations aa, aA and AA. The prediction performance of GBLUP is unaffected by the choice of whether the allele variant a or A is counted, since we can express a permutation of the initial coding {0,1,2} by a translation by −2 and a multiplication of the coding by −1.
Obviously, this argumentation cannot be used for the epistasis model, since we do not have the possibility to translate the marker coding. This fact raises the question under which circumstances the epistasis EGBLUP model is unaffected by a permutation of the role of the allele variants.
Let us consider locus j with alleles a and A and locus k with alleles b and B (of a diploid organism). Let us use the same coding for both loci and let the three genotypes aa, aA and AA be coded by three different numbers \(M_{aa}<M_{aA}<M_{AA}\) (or \(M_{aa}>M_{aA}>M_{AA}\)). The only coding for the epistasis terms, whose corresponding effect model on the tuples
$$ \left\{ (j,k) | j \in \{aa,aA,AA\}, k \in\{bb,bB,BB\} \right \} $$
is invariant with respect to a permutation of the role of allele a and A satisfies \(-M_{aa}=M_{AA}\) and \(M_{aA}=0\). Analogously, for markers with only two possible values, the coding has to satisfy \(-M_{a}=M_{A}\).
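A minimal numerical sketch of this invariance (not from the paper's own code): permuting the roles of the alleles maps the dosage g to 2−g, and for a coding symmetric around zero this only flips signs, which the products of the epistasis terms absorb.

```python
# Sketch: check that the symmetric coding {-1,0,1} leaves the interaction
# products M_j*M_k invariant under a permutation of the allele roles
# (dosage g -> 2-g), while the asymmetric coding {0,1,2} does not.
import itertools

def products(coding):
    """All interaction products M_j*M_k over the 3x3 genotype combinations."""
    return [coding[g1] * coding[g2]
            for g1, g2 in itertools.product(range(3), repeat=2)]

sym = {0: -1, 1: 0, 2: 1}     # symmetric coding {-1, 0, 1}
asym = {0: 0, 1: 1, 2: 2}     # asymmetric coding {0, 1, 2}

# permute the allele roles at both loci: genotype g becomes 2 - g
sym_swapped = {g: sym[2 - g] for g in range(3)}
asym_swapped = {g: asym[2 - g] for g in range(3)}

sym_invariant = products(sym) == products(sym_swapped)      # True
asym_invariant = products(asym) == products(asym_swapped)   # False

# swapping the alleles at only one locus flips the sign of every product,
# which a model with effects symmetric around zero absorbs via h -> -h
one_swap = [sym_swapped[g1] * sym[g2]
            for g1, g2 in itertools.product(range(3), repeat=2)]
sign_flip = one_swap == [-x for x in products(sym)]          # True

print(sym_invariant, asym_invariant, sign_flip)
```

The asymmetric coding changes the products themselves (e.g. the combination (aa,bb) moves from weight 0 to weight 4), so no rescaling of effects can restore the original model.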
Property 4 is of central theoretical importance since it implies that the only coding for {0,1}-coded markers in EGBLUP that is invariant with respect to a permutation of the meaning of 0 and 1 is the coding {−c,c} (c≠0). Moreover, if EGBLUP is to possess this reasonable property for markers with three possible values, we have to use the coding {−c,0,c}. We will give an example to illustrate why this property is important for determining marker effects and thus why it may also be important for the overall predictive ability of the model.
Let us consider markers with two possible variants and let us assume that for each pair of markers, the correct underlying weights of the combinations are given by a {0,1} coding. We use a {0,1} coding, but we do not know which variants of the two loci have to be coded as 1 to capture the real effect distribution. We assume that we decide which allele is coded as zero by drawing independently from a Bernoulli distribution with p=0.5 for each marker. To see how well the real underlying weight distribution is captured, we measure the quadratic loss between the best possible fit and the real underlying weights. Let the coding
$$ \begin{array}{c | c | c} & a & A \\ b & 0& 0 \\ B & 0 & 1\\ \end{array} $$
be the correct underlying effect distribution, with the corresponding underlying interaction effect equal to 1 (the problem remains the same if the underlying interaction effect is multiplied with any number c≠0). With a probability of 0.25, we will code both markers j and k correctly and can reduce the quadratic loss to zero by predicting \(\hat {h}_{j,k}=1\). However, with a probability of 0.75, we will make a mistake and choose an incorrect orientation, which means an incorrect underlying parametric model, such as
$$ \begin{array}{c|c|c} & a & A \\ b & 1 \cdot \; h_{j,k} & 0 \\ B & 0 & 0\\ \end{array} $$
In this situation, we can determine the optimally fitting interaction \(\hat {h}_{j,k}\), which describes the distribution of Eq. (4) best, when model Eq. (5) is used, by minimizing the quadratic Euclidean distance between both effect distributions. In more detail, using a minimal quadratic loss means we have to find an \(\hat {h}_{j,k}\) which minimizes the quadratic distance between the matrices of Eq. (4) and Eq. (5):
$$ (1\cdot h_{j,k}-0)^{2}+(0-0)^{2}+(0-0)^{2}+(0-1)^{2} = h_{j,k}^{2}+1. $$
Thus, the optimal \(\hat {h}_{j,k}\) minimizing Eq. (6) is 0, and the expected quadratic loss when the correct coding with unknown orientation is used is 0.25·0+0.75·1=0.75.
Analogously, if we use the coding {−1,1} instead of Eq. (5), we will obtain the quadratic distance
$${}3(h_{j,k}-0)^{2} + (h_{j,k}-1)^{2} \qquad \text{or} \qquad 3(h_{j,k}-0)^{2} + (h_{j,k}+1)^{2} $$
each with probability 0.5, depending on whether −1 or +1 coincides with the 1 of the real underlying effects. Consequently, the minimum quadratic distance is 0.75 with probability 1, for \(\hat {h}_{j,k}= \pm 0.25\). Thus, in this example, even though the coding {−1,1} specifies a model which is surely wrong, the average quadratic loss is equal to the situation in which we know the exact shape of the effect distribution but not its orientation. If the real underlying effect distribution deviates from the {0,1} coding of Eq. (4), the possibility to adapt the orientation might be even more important.
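The worked example can be reproduced numerically; the following sketch (our own illustration, not the paper's code) minimizes the quadratic loss in closed form for each possible orientation and averages over the orientation probabilities.

```python
# Sketch of the example above: expected quadratic loss of the best-fitting
# interaction effect for the {0,1} coding with random orientation and for
# the symmetric {-1,1} coding. Both should come out as 0.75.
import numpy as np

true_effects = np.array([[0.0, 0.0],    # rows: b, B; columns: a, A
                         [0.0, 1.0]])   # only the combination (A,B) has an effect

def best_fit_loss(design):
    """min_h || h*design - true_effects ||^2, solved in closed form."""
    h_opt = np.sum(design * true_effects) / np.sum(design ** 2)
    return np.sum((h_opt * design - true_effects) ** 2)

# {0,1} coding: the random orientation puts the single 1 in any of the
# four cells with equal probability 0.25; only one choice is correct.
orientations = [np.array([[0., 0.], [0., 1.]]), np.array([[1., 0.], [0., 0.]]),
                np.array([[0., 1.], [0., 0.]]), np.array([[0., 0.], [1., 0.]])]
loss_01 = np.mean([best_fit_loss(d) for d in orientations])

# symmetric {-1,1} coding: products M_j*M_k give the pattern +/-1; the two
# possible orientations only differ by a global sign.
sym_designs = [np.array([[1., -1.], [-1., 1.]]), np.array([[-1., 1.], [1., -1.]])]
loss_sym = np.mean([best_fit_loss(d) for d in sym_designs])

print(loss_01, loss_sym)   # 0.75 0.75
```

The optimizer also recovers \(\hat{h}_{j,k}=\pm 0.25\) for the symmetric designs, matching the derivation above.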
Example 1 illustrated that the expected quadratic loss of the estimated marker-pair weights is equal for the codings {−1,1} and {0,1} even in the case that the underlying effects are a version of the latter one but with unknown orientation. Moreover, we can observe the following: Let us assume that the real underlying interactions (j,k),(j,l) and (k,l) of the three loci j,k,l are described by certain {0,1}-codings, meaning that one certain configuration has an interaction effect but the others do not. Given the underlying effects, we can adapt the coding of j,k and l by considering the effects of the pairs (j,k),(j,l). However, then the effect distribution within the model is also determined for the pair (k,l), because the marker coding has already been fixed. This configuration does not necessarily describe the interaction of (k,l) well. This illustrates that, due to the way interactions are incorporated into EGBLUP, the model with an asymmetric coding lacks the flexibility to adapt to every situation. This problem does not appear with the symmetric coding, since the model is independent of the decision of which allele is coded as ±1. However, there are also good reasons for choosing other types of coding. Firstly, it is not clear whether the effect that we have illustrated on the level of marker effects and quadratic loss also translates to the level of prediction of genetic values. In the latter approach, all effects are predicted simultaneously and thus errors of individual effects can cancel out in the sum. Secondly, from a biological point of view, the symmetric coding seems inadequate: Let us consider markers with two variants and let the two loci j and k have the possible variants a,A and b,B, respectively. The symmetric coding {−1,1} assigns the weight \(1\cdot h_{j,k}\) to the combinations (a,b) and (A,B), meaning that the most distant genotypes, which do not share any allele, are treated as being equal in the model.
Thus, overall, it is not clear which coding will be most appropriate in general. Especially in situations in which additional information on the nature of the marker or the biology of the trait is available, this information may be used to specify the effect model. In the next paragraph, we illustrate how much freedom the marker coding gives to specify the model.
Finding the marker coding for an a priori specified model. Let us consider a model with identical marker coding \(M_{aa}\), \(M_{aA}\) and \(M_{AA}\) for each locus. Then the weights in the model are given by
$$\begin{array}{@{}rcl@{}} a_{1,1}=M_{aa}^{2} & a_{1,2}=M_{aa}M_{aA} & a_{1,3}=M_{aa}M_{AA}\\ a_{2,2}=M_{aA}^{2} & a_{2,3}=M_{aA}M_{AA} & a_{3,3}=M_{AA}^{2}. \end{array} $$
If we want to predefine the weights \(a_{r,s}\) and calculate a corresponding coding, we see that not all choices of weights can be translated into a coding for the epistasis model of Eq. (2), since contradictions can arise. However, the following statement holds:
Let three weights \(a_{r,s}\) of Eq. (7), which together involve all three variables \(M_{aa}\), \(M_{aA}\), \(M_{AA}\), be given by arbitrary nonzero numbers. Then the marker coding as well as the remaining weights are determined up to their signs.
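A small sketch of Property 5 with hypothetical example weights: prescribing, say, \(a_{1,1}\), \(a_{1,2}\) and \(a_{2,3}\) (which jointly involve all three coding values) fixes the coding up to sign, and thereby the remaining weights of Eq. (7).

```python
# Sketch: recover the coding from three prescribed weights of Eq. (7).
# The weights a11, a12, a23 below are arbitrary nonzero example values.
import math

a11, a12, a23 = 4.0, 6.0, 15.0

M_aa = math.sqrt(a11)      # a_{1,1} = M_aa^2, sign chosen as +
M_aA = a12 / M_aa          # a_{1,2} = M_aa * M_aA
M_AA = a23 / M_aA          # a_{2,3} = M_aA * M_AA

# the remaining weights of Eq. (7) are then determined
a13 = M_aa * M_AA
a22 = M_aA ** 2
a33 = M_AA ** 2

# flipping the sign of M_aa forces sign flips of M_aA and M_AA (to keep
# a12 and a23), and a13, a22, a33 are unchanged: determined up to sign.
print(M_aa, M_aA, M_AA, a13, a22, a33)   # 2.0 3.0 5.0 10.0 9.0 25.0
```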
Categorical effect models
In the following, we discuss categorical effect models in which we do not treat the marker data as numerical dosage, but as categorical variables. The goal is to build an epistasis model without the undesired properties of EGBLUP which have been described previously. We model the effects of allele combinations as being independently drawn from a Gaussian distribution with mean zero. For instance, for an additive marker effect model, the effects of aa, aA and AA originate independently from the same distribution. For the analogous epistasis model, the effect of each combination of the alleles of two loci is drawn independently from the same distribution. We will introduce dummy {0,1} variables to indicate which allele configuration is present and thus inflate the number of variables in our model. The important fact to notice in this context is that we can use a relationship matrix approach for genomic prediction (see "Methods") and thus do not need to handle the high number of variables. This procedure also reduces computation time compared to the effect-based approach. All considered effects \(\beta_{j}\) of the variables are assumed to come from the same distribution: \(\beta _{j}\overset {i.i.d.}{\sim } \mathcal {N}(0,\sigma _{\beta }^{2})\).
A categorical marker effect model (CM) The underlying concept of this model is to code the configurations aa, aA, AA of locus j as three different variables. The effect of each genotype is estimated on its own. The assumption of a constant allele substitution effect, that is, that the effect of AA equals twice the effect of A, which is made in the additive numerical GBLUP model, is not made here (see Fig. 1). We translate the genotypes (aa, aA, AA) which can be found at locus j to ((0,0,1),(0,1,0),(1,0,0)). The latter triples indicate which of the three states is present. A genotype of three loci described by (2,0,1) in the numerical GBLUP coding will here be coded by the nine-tuple (1,0,0,0,0,1,0,1,0) (a triple for each locus, describing its state). We then simply use model Eq. (1) with the new coding. An advantage of this model is that it is invariant to an exchange of the roles of a and A (as GBLUP of Eq. (1) is as well), since such an exchange only permutes the meaning of the positions within each triple, with the entries changed accordingly. Moreover, we can account for dominance by estimating each effect on its own. A disadvantage is the increased number of variables, but this can be overcome easily by the use of relationship matrices for genomic prediction. Property 6 describes the relation between the CM model and GBLUP for markers with only two possible values:
Comparison of the parametrization of the genotypic values in GBLUP and the categorical marker effect model CM: Black dots: genotypic values of the corresponding genotype of a certain locus. GBLUP parameterizes the genotypic values by a fixed effect (red dot) and a random effect determining the slope (blue line), whereas CM parameterizes by the fixed effect (red line) and independent random effects (blue lines) for each genotype
For markers with only two possible states, let M denote the n×p marker matrix in the {−1,1} coding. The relationship matrix of GBLUP is given by (a rescaled version of) \(\mathbf{MM'}\). Moreover, let C be the relationship matrix of the CM model. Then
$$ \mathbf{C}= 0.5 (\mathbf{MM'}+\mathbf{J}_{n\times n} p) $$
where p is the number of markers and \(\mathbf{J}_{n\times n}\) the n×n matrix with each entry equal to 1.
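Eq. (8) can be checked numerically; the following sketch (with arbitrary simulated genotypes) counts identical loci directly and compares with the formula.

```python
# Sketch checking Eq. (8): for biallelic markers in {-1,1} coding, the CM
# relationship matrix C (number of loci in identical configuration per pair
# of individuals) satisfies C = 0.5 * (M M' + p * J).
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 12
M = rng.choice([-1.0, 1.0], size=(n, p))

# C counts, for each pair of individuals, the loci with identical genotype
C = np.array([[np.sum(M[i] == M[l]) for l in range(n)] for i in range(n)])

# M_i . M_l = (#identical) - (#different) = 2*(#identical) - p,
# hence #identical = 0.5 * (M_i . M_l + p)
C_formula = 0.5 * (M @ M.T + p * np.ones((n, n)))

print(np.allclose(C, C_formula))   # True
```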
The linear relationship of the covariance matrices demonstrated in Property 6 implies that the prediction performances of GBLUP and CM are identical for markers with only two possible values.
Let us assume that the ratio of the variance components is fixed such that Property 1 holds for the CM model. Then GBLUP and the CM model are identical for markers with only two possible values.
A categorical epistasis model (CE) Analogously to the CM model, we translate the genotype of pairs of loci, e.g. (aA, bb), into {0,1}-tuples. Here, a nine-tuple indicates which combination of alleles of two loci is present. To translate the genotype (2,0,1) of the numerical {0,1,2} coding into the CE coding, we have to translate each marker pair. Each pair is coded by a nine-tuple with only one entry equal to 1 which indicates the configuration:
$$ \left(\underbrace{\bullet}_{(2,2)},\underbrace{\bullet}_{(2,1)},\underbrace{\bullet}_{(2,0)}, \underbrace{\bullet}_{(1,2)},\underbrace{\bullet}_{(1,1)},\underbrace{\bullet}_{(1,0)}, \underbrace{\bullet}_{(0,2)},\underbrace{\bullet}_{(0,1)},\underbrace{\bullet}_{(0,0)}\right). $$
The assignment of the configuration of the respective marker pair to the position of the nine-tuple can be chosen arbitrarily but has of course to be used consistently for all individuals. Let us assume that we have three subsequent loci with genotypes (2,0,1) in the ordinary numerical coding. Then, there are three possible interactions: the first two loci have the combination (2,0) which will be coded as (0,0,1,0,0,0,0,0,0). Additionally, the second pair is (2,1) which will be coded as (0,1,0,0,0,0,0,0,0), whereas the last pair (0,1) is translated to (0,0,0,0,0,0,0,1,0). As already mentioned, an obvious disadvantage of the model is the high number of variables, but we do not have to solve the system for these variables to perform genomic prediction, since we can use equivalent genomic relationship matrices. Moreover, this model eliminates several disadvantages of EGBLUP: i) The model is invariant with respect to the decision which allele is used as reference ("orientation"), since it is based on categorical variables indicating which genotype is present, ii) the effects the model can assign to different pairs of loci are not connected between pairs by their respective codings (as described for the asymmetrically coded EGBLUP after Example 1), and iii) compared to the symmetric {−1,0,1} coding of EGBLUP, CE does not generally assign the same effects to the most different allele combinations.
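The CE translation of the worked example can be sketched directly, using the position convention of the nine-tuple shown above (a hypothetical helper for illustration):

```python
# Sketch of the CE coding of a marker pair. Positions of the nine-tuple
# follow the convention (2,2),(2,1),(2,0),(1,2),(1,1),(1,0),(0,2),(0,1),(0,0).
import itertools

POSITIONS = list(itertools.product((2, 1, 0), repeat=2))

def ce_code(pair):
    """One-hot nine-tuple indicating the configuration of a marker pair."""
    return tuple(1 if pos == pair else 0 for pos in POSITIONS)

# the three pairwise combinations of the genotype (2, 0, 1):
print(ce_code((2, 0)))   # (0, 0, 1, 0, 0, 0, 0, 0, 0)
print(ce_code((2, 1)))   # (0, 1, 0, 0, 0, 0, 0, 0, 0)
print(ce_code((0, 1)))   # (0, 0, 0, 0, 0, 0, 0, 1, 0)
```

These are exactly the three nine-tuples stated in the text for the genotype (2,0,1).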
Relationship matrices for the respective marker models
Let M be the marker matrix of the respective numerical coding (0,1,2 or −1,0,1). In the following, we will present the corresponding relationship matrices for each model.
GBLUP. The relationship matrix for the GBLUP model is given by \(\mathbf{MM'}\) (the n×p genotype matrix multiplied by its transpose).
Epistasis models based on Eq. (2). The relationship matrix corresponding to the interactions of Eq. (2) where j≥k is given by
$$ \mathbf{H} = 0.5 \left(\mathbf{MM' \circ MM'}\right) + 0.5 \left(\mathbf{M \circ M}\right) \left(\mathbf{M \circ M}\right)'. $$
(for a derivation of this statement see [11]). Note here again that the GBLUP model is not affected by a translation of the coding in M, but the performance of EGBLUP is affected.
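Eq. (10) can be verified against the explicit interaction design: the sketch below (our own check, with arbitrary genotypes) builds one column per pair j≥k with entries \(M_{ij}M_{ik}\) and compares the resulting Gram matrix with the formula.

```python
# Sketch checking Eq. (10): the Gram matrix of the explicit pairwise
# interaction features M_ij * M_ik (j >= k) equals
# 0.5*(MM' o MM') + 0.5*(M o M)(M o M)', with 'o' the Hadamard product.
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 8
M = rng.choice([0.0, 1.0, 2.0], size=(n, p))

# explicit interaction design matrix: one column per pair j >= k
pairs = [(j, k) for j in range(p) for k in range(j + 1)]
X = np.column_stack([M[:, j] * M[:, k] for j, k in pairs])
H_explicit = X @ X.T

MMt = M @ M.T
H_formula = 0.5 * (MMt * MMt) + 0.5 * ((M * M) @ (M * M).T)

print(np.allclose(H_explicit, H_formula))   # True
```

The identity follows since summing the products over all (j,k) gives the squared entries of MM′, the diagonal j=k gives (M∘M)(M∘M)′, and averaging the two restricts the sum to j≥k.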
The categorical marker (CM) effect model The i,l-th entry of the corresponding relationship matrix C is given by the inner product of the vectors of the genotypes of individuals i and l in the coding of the CM model. This means that we count the number of loci which have the same configuration. For markers with two possible variants and the marker data in dosage 0,1 coding, we can express the i,l-th entry of C as follows:
$$ C_{i,l} = p - \sum\limits_{j=1}^{p} \left|M_{i,j} - M_{l,j}\right| $$
Analogously, for markers with three different variants, we have to count the number of zeros in the vector \(M_{i,\bullet}-M_{l,\bullet}\) (for the relation between Eqs. (11) and (8), see the derivation of Eq. (8) in Additional file 2).
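A quick numerical check of Eq. (11) (a sketch with arbitrary binary genotype vectors):

```python
# Sketch checking Eq. (11): for {0,1} markers, counting identical loci
# equals p minus the Manhattan distance of the two genotype vectors.
import numpy as np

rng = np.random.default_rng(2)
p = 10
m_i = rng.integers(0, 2, size=p)
m_l = rng.integers(0, 2, size=p)

identical = np.sum(m_i == m_l)                 # direct count
via_eq11 = p - np.sum(np.abs(m_i - m_l))       # Eq. (11)

print(identical == via_eq11)   # True
```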
The categorical epistasis (CE) model The i,l-th entry of the corresponding relationship matrix \(\mathbf{C}_{E}\) is given by the inner product of the genotypes i and l in the coding of the categorical epistasis model. The matrix thus counts the number of pairs which are in identical configuration, and we can express \({C_{E}}_{i,l}\) in terms of \(C_{i,l}\), since the number of identical pairs can be calculated from the number of identical loci:
$$ {C_{E}}_{i,l}= \sum_{k=1}^{C_{i,l}} k =0.5 C_{i,l} \left(C_{i,l} + 1 \right) $$
Here, we also count the "pair" of a locus with itself by allowing \(k\in \{1,\ldots,C_{i,l}\}\). Excluding these self-pairs from the matrix would mean that the maximum of k equals \(C_{i,l}-1\). In matrix notation, Eq. (12) can be written as
$$ \mathbf{C}_{E}= 0.5 \mathbf{C} \circ \mathbf{C} + 0.5 \mathbf{C} $$
Note here that the relation between GBLUP and the epistasis terms of EGBLUP is identical to the relation between CM and CE in terms of relationship matrices: for \(\mathbf{G}=\mathbf{MM'}\) and M a matrix with entries only 0 or 1, Eq. (10) gives Eq. (13) with C=G and \(\mathbf{C}_{E}=\mathbf{H}\).
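Eqs. (12) and (13) can be verified by brute force; the sketch below (arbitrary simulated genotypes) counts identical locus pairs explicitly, self-pairs included, and compares with 0.5 C∘C + 0.5 C.

```python
# Sketch checking Eqs. (12)/(13): the number of locus pairs j >= k (a locus
# paired with itself included) in identical configuration for two genotypes
# equals 0.5 * C_il * (C_il + 1), where C_il is the number of identical loci.
import numpy as np

rng = np.random.default_rng(3)
n, p = 4, 9
M = rng.integers(0, 3, size=(n, p))     # genotypes in {0,1,2} coding

C = np.array([[np.sum(M[i] == M[l]) for l in range(n)] for i in range(n)])

# count identical pairs explicitly (j >= k, self-pairs included)
CE_explicit = np.zeros((n, n))
for i in range(n):
    for l in range(n):
        same = M[i] == M[l]
        CE_explicit[i, l] = sum(bool(same[j]) and bool(same[k])
                                for j in range(p) for k in range(j + 1))

CE_formula = 0.5 * C * (C + 1)          # Eq. (13): 0.5*C o C + 0.5*C

print(np.allclose(CE_explicit, CE_formula))   # True
```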
In addition to the previously discussed EGBLUP model, a common approach to incorporate "non-linearities" is based on Reproducing Kernel Hilbert Space regression [21, 31] by modeling the covariance matrix as a function of a certain distance between the genotypes. The most prominent variant for genomic prediction is the Gaussian kernel. Here, the covariance \(Cov_{i,l}\) of two individuals is described by
$$ {Cov}_{i,l} = \exp(-b \cdot d_{i,l}), $$
with \(d_{i,l}\) being the squared Euclidean distance of the genotype vectors of individuals i and l, and b a bandwidth parameter that has to be chosen. This approach is independent of translations of the coding, since the Euclidean distance remains unchanged if both genotypes are translated. Moreover, this approach is also invariant with respect to a scaling factor, if the bandwidth parameter is adapted accordingly (in this context see also [32]). Thus, EGBLUP and the Gaussian kernel RKHS approach both capture "non-linearities", but they behave differently if the coding is translated.
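The contrasting behavior under translation can be illustrated numerically (a sketch with arbitrary genotypes; the EGBLUP-type matrix is the interaction matrix of Eq. (10)):

```python
# Sketch: the Gaussian kernel Cov_il = exp(-b * d_il), built from squared
# Euclidean distances, is unchanged by a translation of the marker coding,
# while the EGBLUP-type interaction matrix H of Eq. (10) is not.
import numpy as np

rng = np.random.default_rng(4)
n, p, b = 5, 20, 0.1
M = rng.choice([0.0, 1.0, 2.0], size=(n, p))

def gauss_kernel(M, b):
    d = np.sum((M[:, None, :] - M[None, :, :]) ** 2, axis=2)
    return np.exp(-b * d)

def egblup_H(M):
    MMt = M @ M.T
    return 0.5 * (MMt * MMt) + 0.5 * ((M * M) @ (M * M).T)

M_shift = M - 1.0   # translated coding {-1, 0, 1}

kernel_invariant = np.allclose(gauss_kernel(M, b), gauss_kernel(M_shift, b))
H_changes = not np.allclose(egblup_H(M), egblup_H(M_shift))

print(kernel_invariant, H_changes)   # True True
```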
Comparison of the performance of the models on different data sets
Results on the simulated data For 20 independently simulated populations of 1 000 individuals, we modeled three scenarios of qualitatively different genetic architecture (purely additive A, purely dominant D and purely epistatic E) with increasing number of involved QTL (see "Methods") and compared the performances of the considered models on these data. In more detail, we compared GBLUP, a model defined by the epistasis terms of EGBLUP with different codings, the categorical models and the Gaussian kernel with each other. All predictions were based on one relationship matrix only, that is in the case of EGBLUP on the interaction effects only. The use of two relationship matrices did not lead to qualitatively different results (data not shown), but can cause numerical problems for the variance component estimation if both matrices are too similar. For each of the 20 independent simulations of population and phenotypes, test sets of 100 individuals were drawn 200 times independently, and Pearson's correlation of phenotype and prediction was calculated for each test set and model. The average predictive abilities of the different models across the 20 simulations are summarized in Table 2 in terms of empirical mean of Pearson's correlation and its average standard error. Comparing GBLUP to EGBLUP with different marker codings, we see that the predictive ability of EGBLUP is very similar to that of GBLUP, if a coding which treats each marker equally is used. Only the EGBLUP version, standardized by subtracting twice the allele frequency as it is done in the commonly used standardization for GBLUP [6], shows a drastically reduced predictive ability for all scenarios (see Table 2, EGBLUP VR). Moreover, considering the categorical models, we see that CE is slightly better than CM and that both categorical models perform better than the other models in the dominance and epistasis scenarios.
Table 2 Predictive abilities of the models on the simulated data. Comparison of the predictive abilities in terms of correlations between the measured phenotypes and the predictions for the individuals of the test sets ("Pearson's correlation"; 100 test set genotypes were drawn randomly from all 1000 genotypes; 200 repeats for each simulated population; 20 independent simulations of population and phenotypes). Traits of different genetic architecture (additive A, dominant D, Epistasis E) and increasing number of QTL. Model abbreviations as introduced in the text. For EGBLUP, only the matrix based on the interactions was considered here
Results on the wheat data For EGBLUP, we used here the coding {0,1} which was originally used in the data of the publication, a translation by −1 which leads to {−1,0} representing a coding in which the meaning of 0 and 1 is permuted, and a centered version {−1,1}. Moreover, we used the standardization by allele frequencies [6] to calculate EGBLUP. Additionally, we evaluated CM, CE and reevaluated the Gaussian kernel RKHS approach, previously used by Crossa et al. [21] (we used the matrix K obtained from the supplementary of the corresponding publication). The results are summarized in Table 3. CM showed exactly identical results to those of GBLUP (which has already been stated theoretically by Property 7) and is therefore not listed separately. Considering the predictive ability of EGBLUP with different codings, a first thing to note is that the variability among the EGBLUP variants is higher than that found on the simulated data. Moreover, with the data sets of environments 1,3 and 4, EGBLUP tends to outperform GBLUP. Among them, the model with symmetric {−1,1} coding performs best and the VanRaden standardized version of EGBLUP has a significantly reduced predictive ability for the data of environments 1, 2 and 3, which is analogous to what we have already seen on the simulated data. Moreover, the predictive ability of EGBLUP with symmetric coding seems to be closest to that of the Gaussian kernel. For the data of environment 2, no big differences in the performance of the models (except for the allele frequency standardized EGBLUP) can be observed. Overall, the Gaussian kernel RKHS method performs best on this data set and the predictive ability of the CE model is on the level of the asymmetrically coded versions of EGBLUP.
Table 3 Predictive abilities of the models on the wheat data. Comparison of the predictive abilities as Pearson's correlation of the measured phenotypes and the predictions for the individuals of the test sets (60 test set genotypes, trait: grain yield)
Results on the mouse data We compared the models on 13 traits related to obesity, weight and immunology. Instead of the raw phenotypes, we used pre-corrected residuals which are publicly available (see "Methods"). Again, we compared GBLUP, EGBLUP with 0,1,2 coding as well as with inverted, symmetric and by allele frequencies standardized coding, the categorical models and the Gaussian kernel RKHS approach with each other. The results are summarized in Table 4. The general patterns observed on the previously considered data remain the same: Any EGBLUP version treating the markers equally has at least the same predictive ability as GBLUP for all traits. Among them, the symmetric coding seems to perform best. The allele frequency standardized version of EGBLUP has in three of the 13 traits a higher predictive ability than its other versions (W6W, GrowthSlope, CD8Intensity), but a smaller one in ten cases. Considering only significant differences between CM and GBLUP, CM outperforms GBLUP on the traits %CD4/CD3 and %CD8/CD3 and shows a lower predictive ability only for BMI and BodyLength. Moreover, CE outperforms CM slightly. Overall, two traits are predicted best by EGBLUP VR, three traits by CE, and five by the symmetric version of EGBLUP and the Gaussian kernel, respectively.
Table 4 Predictive abilities of the models on the mouse data. Comparison of the predictive abilities as Pearson's correlation of the measured phenotypes and the predictions for the individuals of the test set (130 test set genotypes). Here, the already for fixed effects pre-corrected residuals of the phenotypes, which are also provided by the publicly available data, were used
Incorporating prior experimental information by marker coding
The coding-dependent performance of EGBLUP also offers possibilities to incorporate additional information. He et al. [12, 13] have already illustrated the idea of data-driven coding, and we have recently shown that information on the performance of genotypes grown under different environmental conditions can be used to select variables within EGBLUP, which then can be used for genome-assisted prediction within another environment [11]. Here, we will demonstrate that differential coding is also appropriate for incorporating prior experimental information into EGBLUP. For this, we used the different trait (× environment) combinations and adapted the marker coding of each pair of loci to the data, following the procedure described in the "Methods" section. Importantly, we decided individually for each pair of markers which orientation the corresponding coding of that pair shall have. The "orientation" of the underlying effect model is chosen for each pair; thus, we cut the connection between the codings of different pairs. The determined relationship matrices are then used to predict within the data of other traits. The results are summarized in Tables 5 and 6 for the wheat and mouse data sets, respectively. We can see here that adapting the coding to data of previous experiments can be beneficial for the predictive ability. In the case of the wheat data set, Table 5 shows that using the data of grain yield of the genotypes grown in environments 3 and 4 to infer the marker coding for each pair of markers improves the prediction accuracy in environment 2 to a level higher than that of all methods which do not use the data of other experiments (from 0.504±0.007 to 0.544±0.006). The situation is analogous for the predictive ability in environment 3, if the data of environment 2 is used to infer the relationship matrix.
However, the gain in predictive ability resulting from this procedure is relatively small compared to the gain by means of variable selection [11]. Adapting the coding to given data also helped to increase predictive ability on the mouse data (see Tables 4 and 6). For instance, improvements from 0.285±0.006 to 0.313±0.005, from 0.536±0.004 to 0.569±0.004, and from 0.664±0.004 to 0.685±0.003 were reached for the traits BodyLength, %CD3 and %CD4/CD3, respectively.
Table 5 Predictive abilities on the wheat data when prior information is incorporated in the marker coding of EGBLUP. Predictive abilities when the coding for each interaction is determined based on records under different environmental conditions
Table 6 Predictive abilities on the mouse data when prior information is incorporated in the marker coding of EGBLUP. Predictive abilities when the coding for each interaction is determined based on the records of other traits
The effect of the choice of marker coding on EGBLUP
We recalled that GBLUP is not sensitive to certain changes of the marker coding if the variance components are adapted accordingly. Analogously, we also proved that the interaction terms of EGBLUP are invariant to factors rescaling the marker coding, but showed that a translation indeed changes the underlying marker effect model drastically. In particular, we demonstrated that the effect model of EGBLUP with the asymmetric 0,1,2 coding is affected by the decision which allele to count. Thus, an important observation concerning EGBLUP is that the only coding allowing a permutation of the roles of the alleles without changing the underlying interaction effect model for the respective marker pair is symmetric around zero. This coding solves the problem of "which allele to count", but we also argued that the symmetric coding appears to be biologically implausible since it assigns the same interaction effect to the most distant genotypes. Concerning the allele frequency adjusted version EGBLUP VR, we illustrated that the different markers are not treated equally and thus that the interaction effect models here depend on the allele frequencies of the involved alleles. On the level of predictive ability, the symmetric coding tends to outperform the asymmetric versions slightly, which can most clearly be seen from the data of environment 1 and 4 of the wheat data set (Table 3). Also with the mouse data set, the symmetric coding had a higher predictive ability than the other codings treating all loci equally for all traits, but the improvements were most often very small. Concerning the allele-frequencies standardized version EGBLUP VR, we observed a drastic reduction in the predictive ability compared to other EGBLUP versions in most of the examples. 
One reason for the comparatively poor performance can be illustrated as follows: the relationship matrix corresponding to the interaction effects of EGBLUP in a certain coding is basically the GBLUP relationship matrix, but with each of its entries squared (if all pairwise interactions and interactions of a marker with itself are modeled, see [10, 11] and compare to Eq. (10)). The standardization by twice the allele frequencies (and division by a certain factor representing a variance [6]) produces a GBLUP matrix which can possess entries larger than 1 and smaller than 0. In particular, if the GBLUP matrix has negative entries, squaring them changes the order of the relationships between the individuals. For instance, if individual A has a relation of −0.1 with individual B and of −0.3 with individual C, which means that A is more closely related to B than to C, the corresponding EGBLUP matrix states that the relation between A and C is closer than that between A and B. This argument applies equally to the symmetric coding, but in our examples the proportion of negative entries in the corresponding additive relationship matrix was close to zero for the wheat and the mouse data sets when the symmetric coding was used. Overall, in spite of a certain popularity of EGBLUP in recent literature [10, 11, 17], our results suggest that the use of products of marker values as predictor variables is not the best way to incorporate interactions into the GBLUP model. Moreover, contrary to the theoretical findings on the "congruency" of EGBLUP and the Gaussian kernel in a RKHS approach [10], our results show that the two methods respond differently to a change of marker coding: a translation of the coding has an impact on the predictive ability of EGBLUP, but not on that of the Gaussian kernel. Since the Euclidean distance between two vectors will not change under a translation of both vectors, the corresponding relationship matrix remains identical.
A reconsideration of the limit behavior of EGBLUP when the degree of interaction increases to n-factor interaction (and n→∞) may therefore be interesting from a theoretical point of view.
To develop an alternative to EGBLUP which does not possess the illustrated undesired theoretical properties, but which, unlike the RKHS approaches, allows the predicted quantities to be interpreted as "effects", we considered the categorical effect models (the effects of the categorical models can be explicitly calculated from phenotypes or genetic values using the well-known Mixed Model formulas for effects with the respective design matrices). As a first step, we constructed the categorical marker effect model CM, which does not use the assumption of a constant allele substitution effect (Fig. 1) and thus offers the possibility to model (over)dominance through an independent effect for each genotype at a locus. That this property can also lead to an increase in predictive ability was illustrated by the simulated dominance scenario. An important result is that this categorical model can be rewritten as a relationship matrix model and thus provides an equivalent to the Ridge Regression/GBLUP duality, but based on a categorical effect model instead of a numerical dosage model. Whether this model increases predictive ability will always depend on the population structure and the influence of dominance effects on a particular trait. For instance, if a population originating from lines of different heterotic pools is considered, the prevalent heterosis effect might be a good reason to use CM instead of GBLUP, since heterosis creates a deviation from the linear dosage model. Moreover, the number of heterozygous and homozygous loci in the data set is important: if most loci are mainly present in only two of the three possible SNP genotypes, CM cannot outperform GBLUP substantially. Interestingly, comparing GBLUP and CM, CM was significantly outperformed only on the traits BMI and BodyLength.
Thus, abandoning the assumption of a dosage effect of an allele (implemented by counting its occurrences and multiplying the count by an additive effect) need not in general be a problem for prediction. Note also that there are other ways of defining marker-based dominance matrices, as described for instance by Su et al. [33]. Moreover, dominance can be modeled implicitly by the epistatic interaction term of a locus with itself in Eq. (2) if j=k (see [11]).
Analogously to the relation between GBLUP and EGBLUP, we extended the categorical marker effect model CM to the categorical epistasis model CE. The disadvantage of inflating the model with a huge number of variables is solved for genomic prediction by using an equivalent relationship-matrix-based approach. Interestingly, the analogy between GBLUP and EGBLUP also translates to the level of relationship matrices, which we illustrated by the theoretical result of Eq. (13): the relationship matrix of CE has the same connection to the relationship matrix of CM as the matrix defined by the interaction terms of EGBLUP has to the genomic relationship matrix of GBLUP. Moreover, CE eliminates undesired theoretical properties of EGBLUP: the question of which allele to use as reference does not arise, its structure does not lead to a dependence between the effect models of different pairs of loci, and it does not assign the same effects to the most different allele combinations, as the symmetrically coded EGBLUP model does. On the wheat data, which consist of markers with only two possible values and for which GBLUP coincides with CM, CE outperformed GBLUP in all environments (Table 3). Moreover, CE slightly improved the predictive ability of CM for all considered traits of the mouse data set. Overall, the CE model is a valuable alternative for modeling epistasis, since it eliminates undesired properties of EGBLUP and shows convincing results in practice. However, other, more realistic parametric effect structures between EGBLUP and CE may be of interest for future research. Important steps in this direction have already been made with the "hybrid" coding of He et al. [12, 13], in which the marker coding is estimated from the data under the side condition of generating a monotone effect model. Moreover, an interesting approach for future investigation may be the adaptation of categorical models to other types of variables, for instance those defined by haplotypes.
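The duality between explicit pairwise interaction effects and Hadamard products of relationship matrices (in the spirit of Eq. (13) and of [11]) can be checked numerically. The identity verified below, $ZZ' = \tfrac12\big((MM')\circ(MM') - (M\circ M)(M\circ M)'\big)$ for interaction features $z_{jk}=m_jm_k$ with $j<k$, is a standard consequence of the product structure; the code is our own sketch, not the paper's implementation:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
M = rng.integers(-1, 2, size=(5, 8)).astype(float)  # symmetric -1/0/1 coding

# Explicit interaction features: one column per unordered marker pair (j < k)
Z = np.column_stack([M[:, j] * M[:, k]
                     for j, k in combinations(range(M.shape[1]), 2)])

G = M @ M.T                                    # additive relationship (unscaled)
H_direct = Z @ Z.T                             # interaction relationship, explicit
H_hadamard = 0.5 * (G * G - (M * M) @ (M * M).T)  # Hadamard-product shortcut

assert np.allclose(H_direct, H_hadamard)
```

The shortcut avoids ever forming the quadratically many interaction columns, which is what makes the relationship-matrix formulation of epistasis models computationally feasible.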
Incorporating prior experimental information into the coding of EGBLUP
Finally, we demonstrated that the marker coding can be used to incorporate prior information. An important property of our procedure is that it "decouples" the effect models of different pairs by allowing the orientation of the parametric model to be chosen for each pair separately (see "Methods"). In particular, this means that marker j might be coded as 0,1,2 in combination with marker k, but as −2,−1,0 in combination with marker l. The criterion for deciding which coding to use was simple here: we compared the size of the absolute interaction effect of a pair under the different "orientations". Note that the improvement in prediction accuracy was smaller than that achieved by variable selection on the wheat data set [11]. The relatively small improvement might result from allowing only two possibilities (both markers in the initial coding, or both markers with inverted coding) rather than choosing from all four possible orientations. We used this simplified procedure because, for combinations of one marker with original coding and the other with inverted coding, the assigned effect also depends on the orientation of other pairs, which makes it difficult to determine which orientation to choose once the orientations of other pairs may change as well. In this regard, the presented method can be considered a straightforward ad hoc approach to incorporating prior knowledge into the coding, capturing part of the covariance structure of the given data and thus improving predictive ability on data sets with a similar covariance structure.
We illustrated that the EGBLUP model possesses several undesired properties caused by the interactions being modeled as products of marker values. We showed that the symmetrically coded EGBLUP tends to perform best, that the allele-frequency-standardized version tends to have the lowest predictive ability, and that the CE model can be an attractive alternative to EGBLUP. Prior information from other experiments can be incorporated into the marker coding of EGBLUP, which offers the potential to enhance predictive ability for correlated traits.
1 In literature, the expression GBLUP is used for the reformulated equivalent of Eq. (1) with genetic value g:=M β and thus \(\mathbf {g}\sim \mathcal {N}(0,\sigma ^{2}_{\beta } \mathbf {MM}')\).
CM:
Categorical marker effect model
CE:
Categorical epistasis model
DArT:
Diversity Arrays Technology
EGBLUP:
Extended genomic best linear unbiased prediction
GBLUP:
Genomic best linear unbiased prediction
MAF:
Minor allele frequency
SNP:
Single nucleotide polymorphism
Meuwissen T, Hayes B, Goddard M. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001; 157(4):1819–29.
Hayes BJ, Visscher PM, Goddard ME. Increased accuracy of artificial selection by using the realized relationship matrix. Genet Res. 2009; 91(01):47–60.
Abraham G, Tye-Din JA, Bhalala OG, Kowalczyk A, Zobel J, Inouye M. Accurate and robust genomic prediction of celiac disease using statistical learning. PLoS Genet. 2014; 10(2):1004137.
Henderson CR. Best linear unbiased estimation and prediction under a selection model. Biometrics. 1975; 31(2):423–47.
Habier D, Fernando R, Dekkers J. The impact of genetic relationship information on genome-assisted breeding values. Genetics. 2007; 177(4):2389–97.
VanRaden P. Efficient methods to compute genomic predictions. J Dairy Sci. 2008; 91(11):4414–23.
Piepho HP. Ridge regression and extensions for genomewide selection in maize. Crop Sci. 2009; 49(4):1165–76.
Albrecht T, Wimmer V, Auinger HJ, Erbe M, Knaak C, Ouzunova M, Simianer H, Schön CC. Genome-based prediction of testcross values in maize. Theor Appl Genet. 2011; 123(2):339–50.
Strandén I, Christensen OF. Allele coding in genomic evaluation. Genet Sel Evol. 2011; 43(25):1–11. http://www.gsejournal.org/content/43/1/25.
Jiang Y, Reif JC. Modeling epistasis in genomic selection. Genetics. 2015; 201(2):759–68.
Martini JWR, Wimmer V, Erbe M, Simianer H. Epistasis and covariance: How gene interaction translates into genomic relationship. Theor Appl Genet. 2016; 129(5):963–76.
He D, Wang Z, Parida L. Data-driven encoding for quantitative genetic trait prediction. BMC Bioinformatics. 2015; 16(Suppl 1):10.
He D, Parida L. Does encoding matter? a novel view on the quantitative genetic trait prediction problem. BMC Bioinformatics. 2016; 17(Suppl 9):272.
Falconer DS, Mackay TF, Frankham R. Introduction to quantitative genetics.
Zeng ZB, Wang T, Zou W. Modeling quantitative trait loci and interpretation of models. Genetics. 2005; 169(3):1711–25.
Hallgrímsdóttir IB, Yuster DS. A complete classification of epistatic two-locus models. BMC Genet. 2008; 9(1):17.
Hu Z, Li Y, Song X, Han Y, Cai X, Xu S, Li W. Genomic value prediction for quantitative traits under the epistatic model. BMC Genet. 2011; 12(1):15.
Mackay TF. Epistasis and quantitative traits: using model organisms to study gene-gene interactions. Nat Rev Genet. 2014; 15(1):22–33.
Wang D, El-Basyoni IS, Baenziger PS, Crossa J, Eskridge K, Dweikat I. Prediction of genetic values of quantitative traits with epistatic effects in plant breeding populations. Heredity. 2012; 109(5):313–9.
Sargolzaei M, Schenkel FS. QMSim: a large-scale genome simulator for livestock. Bioinformatics. 2009; 25(5):680–1.
Crossa J, de los Campos G, Pérez P, Gianola D, Burgueño J, Araus JL, Makumbi D, Singh RP, Dreisigacker S, Yan J, Arief V, Banziger M, Braun HJ. Prediction of genetic values of quantitative traits in plant breeding using pedigree and molecular markers. Genetics. 2010; 186(2):713–24.
Solberg LC, Valdar W, Gauguier D, Nunez G, Taylor A, Burnett S, Arboledas-Hita C, Hernandez-Pliego P, Davidson S, Burns P, et al. A protocol for high-throughput phenotyping, suitable for quantitative trait analysis in mice. Mamm Genome. 2006; 17(2):129–46.
Valdar W, Solberg LC, Gauguier D, Cookson WO, Rawlins JNP, Mott R, Flint J. Genetic and environmental effects on complex traits in mice. Genetics. 2006; 174(2):959–84.
Durinck S, Spellman PT, Birney E, Huber W. Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat Protoc. 2009; 4(8):1184–91.
Durinck S, Moreau Y, Kasprzyk A, Davis S, De Moor B, Brazma A, Huber W. Biomart and bioconductor: a powerful link between biological databases and microarray data analysis. Bioinformatics. 2005; 21(16):3439–440.
Wimmer V, Albrecht T, Auinger HJ, Schoen CC. synbreed: a framework for the analysis of genomic prediction data using R. Bioinformatics. 2012; 28(15):2086–7.
Akdemir D, Godfrey OU. EMMREML: Fitting Mixed Models with Known Covariance Structures. 2015. R package version 3.1. http://CRAN.R-project.org/package=EMMREML.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2014. http://www.R-project.org/.
Ober U, Huang W, Magwire M, Schlather M, Simianer H, Mackay TF. Accounting for genetic architecture improves sequence based genomic prediction for a drosophila fitness trait. PloS ONE. 2015; 10(5):1–17: e0126880. doi:10.1371/journal.pone.0126880.
Zhang Z, Ober U, Erbe M, Zhang H, Gao N, He J, Li J, Simianer H. Improving the accuracy of whole genome prediction for complex traits using the results of genome wide association studies. PloS ONE. 2014; 9(3):93017.
Gianola D, Morota G, Crossa J. Genome-enabled prediction of complex traits with kernel methods: What have we learned? In: Proceedings of the 10th World Congress of Genetics Applied to Livestock Production. Vancouver, BC, Canada: 2014. https://asas.confex.com/asas/WCGALP14/webprogram/Paper10331.html.
Long N, Gianola D, Rosa GJ, Weigel KA. Marker-assisted prediction of non-additive genetic values. Genetica. 2011; 139(7):843–54.
Su G, Christensen OF, Ostersen T, Henryon M, Lund MS. Estimating additive and non-additive genetic variances and predicting genetic merits using genome-wide dense single nucleotide polymorphism markers. PloS ONE. 2012; 7(9):45293.
JWRM thanks Maria Emilia Barreyro for helpful discussions.
We acknowledge support by the Open Access Publication Funds of the Göttingen University. JWRM thanks KWS SAAT SE for financial support. NG thanks the China Scholarship Council (CSC) for financial support. RJCC was supported by grants FONCyT PICT 2013-1661, UBACyT 20020150100230B/ 2016 and PIP CONICET 833/2013, from Argentina.
The simulated data, the filtered and imputed genotypes of the mouse data and the corrected phenotypes can be found in Additional file 1. The raw mouse data and a detailed description of the data can be found at the corresponding UCL website (at the moment http://mtweb.cs.ucl.ac.uk/mus/www/mouse/HS/index.shtml and http://mtweb.cs.ucl.ac.uk/mus/www/GSCAN/). The wheat data are available from the corresponding publication. See also the "Methods" section for more details.
JWRM: Wrote the manuscript, derived the theoretical proofs of the statements, proposed to consider the topic; proposed and programmed the algorithm to adapt the coding to given data; analyzed the data; NG: supported the data analysis; prepared the mouse data set; parallelized the presented algorithm to adapt the coding to given data; tested the models on different data sets and with different validation methods; DFC: supported the data analysis; reevaluated the results with different prediction pipelines; simulated the genotypes with the QMSim software. VW, ME, RJCC, HS: guided the research. All authors have read and approved the final version of the manuscript.
Department of Animal Sciences, Georg-August University, Albrecht Thaer-Weg 3, Göttingen, Germany
Johannes W. R. Martini, Ning Gao, Diercles F. Cardoso, Malena Erbe & Henner Simianer
National Engineering Research Center for Breeding Swine Industry, Guangdong Provincial Key Lab of Agro-animal Genomics and Molecular Breeding, College of Animal Science, South China Agricultural University, Guangzhou, China
Ning Gao
Departamento de Zootecnia, São Paulo State University, São Paulo, Brazil
Diercles F. Cardoso
KWS SAAT SE, Einbeck, Germany
Valentin Wimmer
Institute for Animal Breeding, Bavarian State Research Centre for Agriculture, Grub, Germany
Malena Erbe
Department of Animal Production, University of Buenos Aires, INPA-CONICET, Buenos Aires, Argentina
Rodolfo J. C. Cantet
Johannes W. R. Martini
Henner Simianer
Correspondence to Johannes W. R. Martini.
Rdata-file with two lists. The list "Mouse_Data" contains a genotype matrix of 1298 individuals and 9265 markers as well as a matrix with records of 13 traits of the individuals. The list "Simulated_Data" offers the genotypes and phenotypes of the 20 simulations. Each entry of this list is a list of two elements representing genotypes and phenotypes of the respective simulation. Genotypes are given by a matrix of 1000 individuals with 9000 markers. Phenotypes are provided as a data.frame of the 1000 individuals and the 9 different phenotypes described in the Methods section. (RDATA 64512 kb)
The file presents mathematical arguments for the statements on the properties of the models, which have been made in the main text. (PDF 149 kb)
Martini, J.W.R., Gao, N., Cardoso, D.F. et al. Genomic prediction with epistasis models: on the marker-coding-dependent performance of the extended GBLUP and properties of the categorical epistasis model (CE). BMC Bioinformatics 18, 3 (2017). https://doi.org/10.1186/s12859-016-1439-1
Epistasis model
Results and data
Normal form
Any equivalence relation $\sim$ on a set of objects $\mathscr M$ defines the quotient set $\mathscr M/\sim$ whose elements are equivalence classes: the equivalence class of an element $M\in\mathscr M$ is denoted $[M]=\{M'\in\mathscr M:~M'\sim M\}$. Description of the quotient set is referred to as the classification problem for $\mathscr M$ with respect to the equivalence relation. The normal form of an object $M$ is a "selected representative" from the class $[M]$, usually possessing some nice properties (simplicity, integrability etc). Often (although not always) one requires that two distinct representatives ("normal forms") are not equivalent to each other: $M_1\ne M_2\iff M_1\not\sim M_2$.
The equivalence $\sim$ can be an identical transformation in a certain formal system: the respective normal form in such case is a "canonical representative" among many possibilities, see, e.g., disjunctive normal form and conjunctive normal form for Boolean functions.
However, the most typical classification problems appear when there is a group $G$ acting on $\mathscr M$: then the natural equivalence relation arises, $M_1\sim M_2\iff \exists g\in G:~g\cdot M_1=M_2$. If both $\mathscr M$ and $G$ are finite-dimensional spaces, the classification problem is usually much easier than in the case of infinite-dimensional spaces.
Below is a (very partial) list of the most important classification problems for which normal forms are known and very useful. For more detailed descriptions of specific cases, follow the links in the appropriate subsections.
1 Finite-dimensional classification problems
1.1 Linear maps between finite-dimensional linear spaces
1.2 Linear operators (self-maps)
1.3 Quadratic forms on linear spaces
1.4 Quadratic forms on Euclidean spaces
1.5 Quadratic forms on the symplectic spaces
1.6 Conic sections in the real affine and projective plane
1.7 Families of finite-dimensional objects
2 Singularities of differentiable mappings
2.1 Maps of full rank
2.2 Germs of maps in small dimension
2.2.1 Holomorphic curves
2.2.2 Nondegenerate critical points of functions and the Morse lemma
2.2.3 Degenerate critical points of smooth functions
2.2.4 "Elementary catastrophes"
3 Classification of dynamical systems
4 References and basic literature
Finite-dimensional classification problems
When the objects of classification form a finite-dimensional variety, in most cases it is a subvariety of matrices, with the equivalence relation induced by transformations reflecting the change of basis.
Linear maps between finite-dimensional linear spaces
Let $\Bbbk$ be a field. A linear map from $\Bbbk^m$ to $\Bbbk^n$ is represented by an $n\times m$ matrix over $\Bbbk$ ($n$ rows and $m$ columns). A different choice of bases in the source and the target space results in a matrix $M$ being replaced by another matrix $M'=HML$, where $H$ (resp., $L$) is an invertible $n\times n$ (resp., $m\times m$) matrix of transition between the bases, $$ M\sim M'\iff\exists H\in\operatorname{GL}(n,\Bbbk),\ L\in \operatorname{GL}(m,\Bbbk):\quad M'=HML. \tag{LR} $$
Obviously, this binary relation $\sim$ is an equivalence (symmetric, reflexive and transitive), called left-right linear equivalence. Each matrix $M$ is left-right equivalent to a matrix (of the same size) with $k\leqslant\min(n,m)$ units on the diagonal and zeros everywhere else. The number $k$ is a complete invariant of equivalence (matrices of different ranks are not equivalent) and is called the rank of a matrix.
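As a numerical illustration (our own sketch, not part of the article): the SVD $M=U\Sigma V^*$ directly produces invertible factors bringing $M$ to its left-right normal form with $k$ units on the diagonal.

```python
import numpy as np

def left_right_normal_form(M, tol=1e-10):
    """Return invertible H, L and the rank k such that H @ M @ L equals the
    normal form [[I_k, 0], [0, 0]].  Built from the SVD M = U Sigma V^T."""
    U, s, Vt = np.linalg.svd(M)
    k = int((s > tol).sum())
    d = np.ones(M.shape[0])
    d[:k] = 1.0 / s[:k]        # rescale the nonzero singular values to 1
    H = np.diag(d) @ U.T       # invertible n x n
    L = Vt.T                   # invertible m x m (orthogonal)
    return H, L, k

M = np.array([[1., 2., 3.],
              [2., 4., 6.]])  # second row is twice the first: rank 1
H, L, k = left_right_normal_form(M)
N = H @ M @ L
# N is 2x3 with a single 1 in the top-left corner and zeros elsewhere
```

This also shows why the rank is a complete invariant: every matrix reaches the same normal form once the singular values are rescaled away.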
A similar question may be posed about homomorphisms of finitely generated modules over rings. For some rings the normal form is known as the Smith normal form.
Linear operators (self-maps)
The matrix of a linear operator from an $n$-dimensional space over $\Bbbk$ into itself is transformed by a change of basis in a more restrictive way than (LR): since the source and the target spaces coincide, necessarily $n=m$ and $L=H^{-1}$. The corresponding equivalence is called similarity (sometimes conjugacy or linear conjugacy) of matrices, and the normal form is known as the Jordan normal form, see also here. This normal form is characterized by a specific block diagonal structure and explicitly features the eigenvalues on the diagonal. Note that this form is available only over an algebraically closed field $\Bbbk$, e.g., $\Bbbk=\CC$.
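As a hedged illustration (using SymPy, which is our choice and not part of the article): computer algebra systems compute the Jordan normal form exactly, including the conjugating matrix. The 4×4 example below has a repeated eigenvalue with a nontrivial block.

```python
from sympy import Matrix

A = Matrix([[5, 4, 2, 1],
            [0, 1, -1, -1],
            [-1, -1, 3, 0],
            [1, 1, -1, 2]])   # eigenvalues 1, 2, 4, 4 with one 2x2 Jordan block

P, J = A.jordan_form()        # J block-diagonal, A = P * J * P**(-1)
assert A == P * J * P.inv()
# exactly one 1 on the superdiagonal of J, coming from the single 2x2 block
assert sum(J[i, i + 1] for i in range(3)) == 1
```

Because the geometric multiplicity of the eigenvalue 4 is 1 while its algebraic multiplicity is 2, the matrix is not diagonalizable, and the superdiagonal 1 records precisely that obstruction.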
Quadratic forms on linear spaces
A quadratic form $Q\colon\Bbbk^n\to\Bbbk$, $(x_1,\dots,x_n)\mapsto \sum_{i,j=1}^n a_{ij}x_ix_j$, with a symmetric matrix $Q=(a_{ij})$, after a linear invertible change of coordinates will have a new matrix $Q'=HQH^*$ (the asterisk means the transpose): $$ Q'\sim Q\iff \exists H\in\operatorname{GL}(n,\Bbbk):\ Q'=HQH^*.\tag{QL} $$ The normal form for this equivalence, termed matrix congruence, is diagonal, but the diagonal entries depend on the field:
Over $\RR$, the diagonal entries can be all made $0$ or $\pm 1$. The signature gives the number of entries of each type: by Sylvester's law of inertia it is an invariant of classification.
Over $\CC$, one can keep only zeros and units (not signed). The number of units is called the rank of a quadratic form; it is a complete invariant.
Quadratic forms on Euclidean spaces
This classification deals with real symmetric matrices representing quadratic forms, but (QL) is replaced by the more restrictive condition that the conjugacy matrix $H$ is orthogonal (preserves the Euclidean scalar product): $$ Q'\sim Q\iff \exists H\in\operatorname{O}(n,\RR)=\{H\in\operatorname{GL}(n,\RR):\ HH^*=E\}:\ Q'=HQH^*.\tag{QE} $$ The normal form is diagonal, with the diagonal entries forming a complete system of invariants.
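A small numerical sketch (our own, using NumPy): the spectral decomposition of a real symmetric matrix exhibits the orthogonal congruence to diagonal form, and counting the signs of the eigenvalues recovers the signature that Sylvester's law of inertia declares invariant.

```python
import numpy as np

Q = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., -1.]])

# eigh returns eigenvalues in ascending order and an orthogonal eigenvector matrix
w, V = np.linalg.eigh(Q)
assert np.allclose(V.T @ Q @ V, np.diag(w))   # orthogonal congruence -> diagonal

# Sylvester's law of inertia: the sign pattern of w is the invariant
signature = (int((w > 0).sum()), int((w < 0).sum()))
# here w = [-1, 1, 3], so the signature is (2, 1)
```

Under orthogonal conjugation the eigenvalues themselves are the complete invariants; relaxing to arbitrary invertible congruence (QL) forgets their magnitudes and keeps only the signs.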
A similar set of normal forms exists for self-adjoint (Hermitian) matrices conjugated by unitary matrices.
Quadratic forms on the symplectic spaces
A symplectic space is an even-dimensional space $\R^{2n}$ equipped with the linear symplectic structure, a nondegenerate bilinear form denoted by the brackets $[\cdot,\cdot]\to\R$, which is antisymmetric: $[v,w]=-[w,v]$ for any $v,w\in\RR^{2n}$, [Ar74, Sect. 41]. Any such form can be brought into the normal form with the matrix $$ [e_i,e_j]=[e'_i,e'_j]=0,\qquad [e_i,e'_j]=\begin{cases}1,\quad &i=j,\\0,&i\ne j,\end{cases}\qquad \forall i,j=1,\dots,n. $$ for a suitable basis $\{e_1,\dots,e_n,e'_1,\dots,e'_n\}$ in $\R^{2n}$. If $\R^{2n}$ is equipped with the standard Euclidean structure (in which the above basis is orthonormal), then the symplectic form is generated by a linear operator $I$, $$ [v,w]=(Iv,w),\qquad I=\begin{pmatrix} 0_n&-E_n\\E_n&0_n\end{pmatrix},\quad I=-I^*,\ I^2=-E_{2n}. $$ Here $0_n$ and $E_n$ denote the zero and identity matrices of size $n\times n$ and the asterisk denotes the transposition.
A linear self-map $M:\R^{2n}\to\R^{2n}$ is called canonical, or a symplectomorphism, if it preserves the symplectic structure, $[Mv,Mw]=[v,w]$ for any $v,w$. Linear symplectomorphisms form a finite-dimensional Lie group called the symplectic group and denoted by $\operatorname{Sp}(2n,\R)$ (fields other than $\R$ can also be considered). The matrix of a symplectomorphism in the canonical basis satisfies the condition $M^*IM=I$. The characteristic polynomial $p$ of a symplectic matrix is palindromic, i.e., $\lambda^{2n}p(1/\lambda)=p(\lambda)$.
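The defining relation $M^*IM=I$ and the palindromic property of the characteristic polynomial are easy to verify numerically; the sketch below (our own illustration) uses a shear, since $\operatorname{Sp}(2,\R)=\operatorname{SL}(2,\R)$.

```python
import numpy as np

n = 1
I = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])   # the standard symplectic matrix

M = np.array([[1., 1.],
              [0., 1.]])                        # a shear; Sp(2, R) = SL(2, R)

assert np.allclose(M.T @ I @ M, I)              # the defining relation M* I M = I

p = np.poly(M)                                  # char. poly coefficients, monic
assert np.allclose(p, p[::-1])                  # palindromic: l^2 p(1/l) = p(l)
```

The palindromic coefficients are the coordinate-free trace of the eigenvalue symmetry: if $\lambda$ is an eigenvalue of a symplectic matrix, so is $1/\lambda$.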
Two (symmetric) quadratic forms $\tfrac12(Qx,x)$ and $\tfrac12(Q'x,x)$ on the symplectic $\R^{2n}$ with symmetric $2n\times 2n$-matrices are called canonically equivalent, if there exists a canonical transformation $M$ conjugating them, $M^*QM=Q'$. The canonical equivalence preserves the Hamiltonian form of equations and hence conjugates also the Hamiltonian linear vector fields $v(x)=IQx$ and $v'=IQ'x$: $M^{-1}IQM=IQ'$.
The eigenvalues of a real matrix $A=IQ$ with $Q^*=Q$ are symmetric both with respect to the real axis and to the change of sign, hence if nonzero, they come in pairs (real $\pm a$ or imaginary $\pm i\omega$) or quadruples $\pm a\pm i\omega$. The Jordan block structure is the same for all eigenvalues in a pair (quadruple). In the simplest case when all Jordan blocks are trivial, the quadratic form $Q$ can be brought by a canonical transformation to a sum of terms of the three types[1] $$ Q_{\pm a}=-a(x_iy_i),\qquad Q_{\pm i\omega}=\pm\tfrac12(\omega^2x_i^2+y_i^2),\qquad Q_{4}=-a(x_iy_i+x_{i+1}y_{i+1})+\omega^2(x_iy_{i+1}-x_{i+1}y_i) $$ in the canonical coordinates $(x_1,\dots,x_n,y_1,\dots,y_n)$. In the case where the operator $IQ$ has nontrivial Jordan blocks, the complete list of normal forms is known but rather complicated [Ar74, Appendix 6].
↑ The terms of type $Q_{\pm i\omega}$ with different signs are not equivalent.
Conic sections in the real affine and projective plane
This problem reduces to the classification of quadratic forms on $\RR^3$. A conic section is the intersection of the cone $\{Q(x,y,z)=0\}$ defined by a quadratic form on $\RR^3$, with the affine subspace $\{z=1\}$. Projective transformations are defined by linear invertible self-maps of $\RR^3$; the affine transformations are the linear self-maps preserving the plane $\{z=0\}$ in the homogeneous coordinates (the "infinite line"). In addition, one can replace the form $Q$ by $\lambda Q$ with $\lambda\ne 0$. This defines two equivalence relations on the space of quadratic forms.
The list of normal forms for both classifications follows from the normal form of quadratic forms:
Rank of $Q$ | Projective curves | Affine curves
3 | $\varnothing_1=\{x^2+y^2=-1\}$, circle $\{x^2+y^2=1\}$ | $\varnothing_1=\{x^2+y^2=-1\}$, circle $\{x^2+y^2=1\}$, parabola $\{y=x^2\}$, hyperbola $\{x^2-y^2=1\}$
2 | point $\{x^2+y^2=0\}$, two lines $\{x^2-y^2=0\}$ | point $\{x^2+y^2=0\}$, two crossing lines $\{x^2-y^2=0\}$, two parallel lines $\{x^2=1\}$, $\varnothing_2=\{x^2=-1\}$
1 | "double" line $\{x^2=0\}$ | $\varnothing_3=\{1=0\}$, "double" line $\{x^2=0\}$
Note that the three empty sets $\varnothing_i$ are different from the algebraic standpoint: $\varnothing_1$ is an imaginary circle, $\varnothing_2$ is a pair of parallel imaginary lines which intersect "at infinity" (if these imaginary lines intersect at a finite point, this point is real), and $\varnothing_3$ is a double line "at infinity".
Families of finite-dimensional objects
$\def\l{\lambda}$ In each of the above problems one can, instead of an individual map $M$ (or a form $Q$), consider a local parametric family of objects $\{M_\lambda\}$, depending regularly (continuously, $C^k$- or $C^\infty$-differentiably, holomorphically) on finitely many real or complex parameters $\lambda$ varying near a certain point $a$ in the parameter space, $\l\in(\RR^p,a)$ or $\l\in(\CC^p,0)$ respectively. Two such local families $M_\lambda$ and $M'_\lambda$ are said to be equivalent by the action of a group $G$ if there exists a local parametric family of group elements $\{g_\lambda\}$, also regular (although perhaps in a weaker or just different sense), that conjugates the two families: $g_\lambda\cdot M_\lambda=M'_\lambda$ for all admissible values of $\lambda$.
The most instructive example is that of families of linear operators. A "generic" operator $M_0$ is diagonalizable with pairwise different eigenvalues $\mu_1,\dots,\mu_n$. One can show that any finite-parametric family $\{M_\lambda\mid\lambda\in(\RR^p,0)\}$ of such operators can be diagonalized by a similarity transformation $M_\lambda\mapsto H_\lambda M_\lambda H_\lambda^{-1}$ with $H_\lambda$ depending on $\l\in(\RR^p,0)$ with the same regularity, so that the eigenvalues $\mu_1(\lambda),\dots,\mu_n(\lambda)$ also depend regularly on $\lambda$. This follows from the Implicit function theorem.
However, when some of the eigenvalues collide, $\mu_i(0)=\mu_j(0)$, the diagonalizing transformation $H_\lambda$ may tend to a degenerate matrix, so that $H_\lambda^{-1}$ diverges to infinity, while the transformation bringing a matrix to its Jordan normal form may be far away from the family $\{H_\lambda\}$. A different choice of the normal form resolves these problems.
Example. Assume that the local family of matrices $\{M_\l|\l\in(\RR^p,0)\}$ is a deformation of the matrix $M_0$ whose normal form is a single Jordan block of size $n$. Then there exists a family of invertible matrices $\{H_\l|\l\in(\RR^p,0)\}$ such that $$ H_\l M_\l H_\l^{-1}= \begin{pmatrix} \mu & 1&\\ &\mu& 1&\\ &&\mu&1&\\ &&&\ddots&\ddots\\ &&&&\mu&1\\ \alpha_1&\alpha_2&\alpha_3&\cdots&\alpha_{n-1}&\alpha_n \end{pmatrix},\tag{SF} $$ where $\mu=\mu(\l)$ and $\alpha_i=\alpha_i(\l)$, $i=1,\dots,n$ are regular (continuous, smooth, analytic,\dots) functions of the parameters $\l\in(\RR^p,0)$ of the same class as the initial family $\{M_\l\}$.
The normal form (SF) is called the Sylvester form, or sometimes the companion matrix. It is closely related to the transformation reducing a higher order linear ordinary differential equation to the system of first order equations, cf. here.
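A small sketch (our own, with NumPy) of the companion-type normal form: placing the negated coefficients of a monic polynomial in the last row, with 1s on the superdiagonal as in (SF) with $\mu=0$, produces a matrix whose eigenvalues are exactly the roots of that polynomial.

```python
import numpy as np

def companion_last_row(coeffs):
    """Companion-type matrix with 1s on the superdiagonal and the negated
    coefficients of the monic polynomial in the last row (cf. the shape of (SF))."""
    a = np.asarray(coeffs, dtype=float)
    a = a / a[0]                                 # normalize to a monic polynomial
    n = len(a) - 1
    C = np.zeros((n, n))
    C[np.arange(n - 1), np.arange(1, n)] = 1.0   # superdiagonal of 1s
    C[-1] = -a[:0:-1]                            # last row: -a_n, ..., -a_1
    return C

# p(l) = l^3 - 6 l^2 + 11 l - 6 = (l - 1)(l - 2)(l - 3)
C = companion_last_row([1, -6, 11, -6])
roots = np.sort(np.linalg.eigvals(C).real)
# the eigenvalues of C are the roots 1, 2, 3
```

Since the entries of the last row depend polynomially on the coefficients, such a form deforms regularly with parameters, which is exactly the advantage of (SF) over the Jordan form in families.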
Deformation of a matrix which consists of several Jordan blocks with different eigenvalues can be reduced to a finite-parameter normal form which involves $d$ constants depending regularly on $\l$, with $$ d=\sum_\mu (\nu_1(\mu)+3\nu_2(\mu)+5\nu_3(\mu)+\cdots). $$ Here $\nu_1(\mu)\geqslant \nu_2(\mu)\geqslant \nu_3(\mu)\geqslant\cdots~$ are the sizes of the Jordan blocks of $M_0$ with the same eigenvalue $\mu$ (arranged in non-increasing order), and the summation extends over all different eigenvalues of the matrix $M_0$ [A71, Theorem 4.4].
For a systematic exposition of this subject, see [A83, Sect. 29, 30]. Normal forms for parametric families of objects (mainly dynamical systems) belong to the area of responsibility of the bifurcation theory.
Singularities of differentiable mappings
This area refers to the classification of (germs of) maps $(\RR^m,0)\to(\RR^n,0)$, which constitute an infinite-dimensional space, with respect to the left-right equivalence: two germs $f,f':(\RR^m,0)\to(\RR^n,0)$ are equivalent if there exist two germs of diffeomorphisms $h:(\RR^m,0)\to(\RR^m,0)$ and $g:(\RR^n,0)\to(\RR^n,0)$ such that $f'=g^{-1}\circ f\circ h$. This left-right action corresponds to a change of local coordinates near the source and target points.
One can consider several parallel flavors of the classification theory:
holomorphic (or real analytic), when both the germ $f$ and the conjugacies $g,h$ are assumed/required to be sums of the convergent Taylor series;
smooth, more precisely, $C^\infty$-smooth;
formal theory, where all objects are represented by formal Taylor series without any assumptions on their convergence.
However, for the left-right classification, the three classifications usually coincide. In particular, if two holomorphic germs are conjugated by a pair of formal self-maps, then they also can be conjugated by a pair of holomorphic self-maps. If two $C^\infty$ germs are formally conjugated, then they are also $C^\infty$ conjugated, etc. The finite smoothness category is not as developed as the three flavors above: one could expect that the differentiability class of the conjugacies will in general be lower than that of the maps, but the sharp estimates are mostly unknown.
For more detailed exposition see Singularities of differentiable mappings. Here we give only a brief summary of available results.
Maps of full rank
With each smooth germ $f:(\RR^m,0)\to(\RR^n,0)$ one can associate a linear map $M:\RR^m\to\RR^n$ which is the linearization of $f$ ($M$ is also called the tangent map to $f$, the Jacobian matrix or the differential of $f$ at the origin). In coordinates one can write this as follows, $$ \forall x\in (\RR^m,0)\quad f(x)=Mx+\phi(x)\in (\RR^n,0),\qquad M=\biggl(\frac{\partial f_i}{\partial x_j}(0)\biggr)_{\!\!\substack{i=1,\dots,n \\ j=1,\dots, m}},\quad \|\phi(x)\|=O(\|x\|^2). $$
If the operator $M$ has full rank, then $f$ is left-right equivalent to the linear germ $g(x)=Mx$ [GG, Corollaries 2.5, 2.6].
These assumptions hold in two cases: where $m\le n$ and $M$ is injective, and where $m\ge n$ and $M$ is surjective. The conclusion reduces the classification of nonlinear germs to that of linear maps, which was already discussed earlier.
This result is equivalent to the Implicit function theorem. In particular, it shows that the image of an immersion locally looks like a coordinate subspace, and the preimages of points by a submersion locally look like a family of parallel affine subspaces of the appropriate dimension.
The obvious reformulation of this theorem is valid also for real-analytic and complex holomorphic germs.
Germs of maps in small dimension
When the rank condition fails, the normal form is nonlinear and is known in small dimensions. The corresponding theory is known as the singularity theory of differentiable maps, or catastrophe theory.
The classification is organized along a tree: the normal forms depend on the rank of the Jacobian matrix, but also on some relationships between higher order Taylor coefficients of $f$ at the origin, introducing deeper and deeper degeneracy. Each such set of conditions is characterized by its codimension, the number of algebraically independent conditions imposed on the initial segment of the Taylor series of $f$ (in invariant terms, on the jet of $f$). By Thom's transversality theorem, singularities of codimension $k$ and higher do not occur in generic families of maps involving less than $k$ parameters.
Holomorphic curves
A nonconstant holomorphic (or real analytic) germ $f:(\C^1,0)\to(\C^1,0)$ is biholomorphically left-right equivalent to the monomial map $g:z\mapsto z^\mu$, $\mu\in\NN$; the value $\mu=1$ corresponds to a full-rank map with a linear normal form, while for $\mu>1$ the normal form is nonlinear. The list of simple normal forms for holomorphic curves $f:(\CC,0)\to(\CC^2,0)$ consists[1] of 6 different series, of which the simplest two are $$ A_{2k}:\ t\mapsto (t^2,t^{2k+1}),\qquad E_{6k}:\ t\mapsto (t^3, t^{3k+1}+\delta t^{3k+p+2}),\quad 0\leqslant p\leqslant k-s,\ \delta\in\{0,1\}. $$
↑ J. W. Bruce, T. J. Gaffney, Simple singularities of mappings $\CC,0$ to $\CC^2,0$, J. London Math. Soc. 26 (1982):3, 465-474, doi:10.1112/jlms/s2-26.3.465, MR0684560.
Nondegenerate critical points of functions and the Morse lemma
A smooth map $f:(\RR^n,0)\to(\RR,0)$ which does not have full rank has a critical point at the origin: $\rd f(0)=0$. In this case the quadratic approximation $Q:\RR^n\to\RR$, $(x_1,\dots,x_n)\mapsto\sum_{i,j=1}^n q_{ij}x_ix_j$, provided by the Hessian matrix $\rd ^2f(0)=\|q_{ij}\|$, $q_{ij}=\frac{\partial^2 f}{\partial x_i\partial x_j}(0)$, is the normal form for the left-right equivalence, provided the rank of this form is full. This assertion is known as the Morse lemma [M], [AVG]: $$ \rd f(0)=0,\ \operatorname{rank}\rd^2 f(0)=n\implies f(x)\sim Q(x). $$ The classification of quadratic forms then allows one to bring $f(x)$ to the normal form $f(x)=x_1^2+\cdots+x_k^2-x_{k+1}^2-\cdots-x_n^2$. It is worth mentioning that one can transform a germ to its normal form by a change of variables in the source only: a change of variable in the target space is unnecessary for critical points.
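By Sylvester's law of inertia, the signature of the Hessian (the counts of positive and negative eigenvalues) determines the Morse normal form. A hedged numerical sketch for $n=2$ (plain Python; the function name and interface are ours):

```python
import math

def morse_signature(hess):
    """Signature (#positive, #negative eigenvalues) of a symmetric 2x2
    Hessian [[fxx, fxy], [fxy, fyy]] at a nondegenerate critical point;
    the Morse normal form has that many +x_i^2 and -x_j^2 terms."""
    a, b, d = hess[0][0], hess[0][1], hess[1][1]
    if a * d - b * b == 0:
        raise ValueError("degenerate critical point: Morse lemma does not apply")
    # eigenvalues of a symmetric 2x2 matrix, computed directly
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    eigs = [(a + d + disc) / 2, (a + d - disc) / 2]
    pos = sum(e > 0 for e in eigs)
    return (pos, 2 - pos)

# f(x, y) = x^2 + 4xy + y^2 has Hessian [[2, 4], [4, 2]] at the origin:
# eigenvalues 6 and -2, signature (1, 1), so f ~ x1^2 - x2^2 (a saddle).
assert morse_signature([[2, 4], [4, 2]]) == (1, 1)
```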
Degenerate critical points of smooth functions
If the critical point of a function is degenerate and its corank $\delta=\operatorname{corank}Q=n-\operatorname{rank}Q>0$, the normal forms become more complicated, although the initial steps are still simple.
If $\delta=1$, then the classification reduces to that of (smooth or analytic) functions of one variable. Except for an "infinitely degenerate" subcase, a function with Hessian of corank 1 can be brought to the normal form denoted by "class $A_\mu$": $$ \rd f(0)=0,\ \operatorname{corank} \rd^2f(0)=1\implies f\sim x_1^{\mu+1}+\sum_{k=2}^n \pm x_k^2. $$ Singularities of corank $\delta\geqslant 2$ and small codimension also have polynomial normal forms. Among these one has to distinguish simple singularities (of critical points of functions), which appear in two series and three exceptional cases. Apart from the series $A_\mu$ mentioned above, the other series, denoted by $D_\mu$, has the normal form $$ f(x)\sim x_1^{\mu-1}+x_1x_2^2+\sum_{k=3}^n \pm x_k^2,\qquad \mu=4,5,\dots. $$ The three exceptional simple singularities also occur for the $\operatorname{corank} \rd^2f(0)=2$ and have the normal form (we omit for simplicity the quadratic Morse part) as follows: $$ E_6:\ x^3+y^4,\qquad E_7:\ x^3+xy^3,\qquad E_8:\ x^3+y^5. $$ This classification is intimately linked to the classification of simple Lie algebras [1][2].
More degenerate critical points can (to some extent) be reduced to polynomial normal forms involving one or more real parameters (thus the number of non-equivalent critical points becomes infinite), see hundreds of cases in [AVG, Ch. II, Sect. 16-17]. Further degeneracy requires normal forms involving arbitrary functions, further increasing the "size" of the lists.
↑ V. I. Arnold, Normal forms of functions near degenerate critical points, the Weyl groups $A_k,D_k,E_k$ and Lagrangian singularities, Funct. Anal. Appl. 6 (1972), no. 4, 3–25, MR0356124
↑ M. Entov, On the $ADE$-classification of the simple singularities of functions, Bull. Sci. Math. 121 (1997), no. 1, 37–60, MR1431099
"Elementary catastrophes"
Smooth germs $f:(\RR^2,0)\to(\RR^2,0)$, with source and target regarded as two different planes (left-right equivalence), have polynomial normal forms in the case $\operatorname{rank}\rd f(0)<2$, provided the higher order terms are not too degenerate. The rank condition means that the determinant (Jacobian) $\det \rd f(x)$ vanishes on a curve $\varSigma\subseteq(\RR^2,0)$ passing through the origin. The curve $\varSigma$, called the discriminant (the critical locus of $f$), is generically smooth at the origin and has a tangent line $\ell=T_0\varSigma\subseteq T_0\RR^2$. The position of this line can be compared with that of another line, $\ell'=\operatorname{Ker} \rd f(0)\subseteq T_0\RR^2$.
If the two lines are transversal (cross each other at a nonzero angle), $T_0\varSigma\pitchfork \operatorname{Ker}\rd f(0)$, then the corresponding singular point is called a fold and is right-left equivalent to the quadratic map $$ f:\begin{pmatrix}x\\y\end{pmatrix}\mapsto \begin{pmatrix}u\\v\end{pmatrix}=\begin{pmatrix}x^2\\y\end{pmatrix}. $$ This map is a two-fold cover of the right half-plane $\{u\geqslant0\}$ in the target plane. The line $\{u=0\}$ is the visible contour of the map.
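A quick sketch of the fold (illustrative plain Python, using only the formula above): the Jacobian of $(x,y)\mapsto(x^2,y)$ is $\operatorname{diag}(2x,1)$, so the critical curve is $\{x=0\}$ (the $y$-axis), transversal to $\operatorname{Ker}\rd f(0)$ (the $x$-axis), and the plane is folded onto $\{u\geqslant 0\}$:

```python
def fold(x, y):
    """The fold normal form (x, y) -> (x^2, y)."""
    return (x * x, y)

# two-fold cover of the right half-plane: (x, y) and (-x, y) have equal images
assert fold(2.0, 5.0) == fold(-2.0, 5.0) == (4.0, 5.0)
# the image never leaves {u >= 0}
assert all(fold(x, 0.0)[0] >= 0.0 for x in [-3.0, -1.0, 0.0, 2.5])
```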
If the two lines coincide, one needs an additional nondegeneracy assumption[1], yet under this condition the singular point is called a cuspidal singularity and is right-left equivalent to the cubic map $$ f:\begin{pmatrix}x\\y\end{pmatrix}\mapsto \begin{pmatrix}u\\v\end{pmatrix}=\begin{pmatrix}xy+x^3\\y\end{pmatrix}. $$ The image of the curve $\varSigma$, the visible contour of the map, is the semicubic parabola $27u^2+4v^3=0$, also referred to as the cusp. For a detailed exposition see [GG, Ch. VI, Sect. 2].
↑ The angle between the directions $\ell$ and $\ell'$, measured along the curve $\varSigma$, should have a simple root at the origin.
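The contour can be checked from a parametrization of $\varSigma$ (an illustrative Python sketch of ours): here $\det\rd f=y+3x^2$, so on $\varSigma=\{y=-3x^2\}$ one gets $u=-2x^3$, $v=-3x^2$, which satisfies the semicubic relation $27u^2+4v^3=0$.

```python
def cusp(x, y):
    """The cusp normal form (x, y) -> (xy + x^3, y)."""
    return (x * y + x ** 3, y)

# det df = y + 3x^2 vanishes on Sigma = {y = -3x^2}; its image
# (u, v) = (-2x^3, -3x^2) satisfies 27 u^2 + 4 v^3 = 0.
for x in [-2.0, -0.5, 0.3, 1.7]:
    u, v = cusp(x, -3 * x * x)
    assert abs(27 * u ** 2 + 4 * v ** 3) < 1e-8
```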
Classification of dynamical systems
Main page: Local normal forms for dynamical systems.
This is the classification of (usually invertible) self-maps $f:(\RR^n,0)\to(\RR^n,0)$, with two self-maps $f,f'$ considered as equivalent if there exists a germ of a diffeomorphism $h:(\RR^n,0)\to(\RR^n,0)$ such that $f'=h^{-1}\circ f\circ h$. This equivalence respects iteration, i.e., extends to an equivalence of cyclic subgroups of $\operatorname{Diff}(\RR^n,0)$: $$ f\sim f'\iff \underbrace{f\circ \cdots\circ f}_{k\text{ times}} \sim \underbrace{f'\circ \cdots\circ f'}_{k\text{ times}}\qquad\forall k=1,2,\dots $$ Such subgroups are naturally identified with discrete-time dynamical systems. A closely related classification of one-parametric subgroups $\{f^t:t\in\RR,\ f^{t+s}=f^t\circ f^s\}\subseteq\operatorname{Diff}(\RR^n,0)$ reduces to the classification of germs of vector fields with a singular point at the origin[1]. Two vector fields $v,v'$ are called equivalent if there exists a diffeomorphism $h$ as above such that $$ h_*v=v'\circ h,\qquad h_*=\frac{\partial h}{\partial x}\ \text{(the Jacobian matrix of }h\text{)}. $$
As with the left-right equivalence of maps, one could first attempt to conjugate a vector field $v$ (or a self-map $f$) to its linear part $A=\rd v(0)$ (resp., $M=\rd f(0)$) and reduce the classification to that of linear operators. However, unlike the previous theory, the possibility of such linearization depends very strongly on the arithmetic nature of eigenvalues of $A$ (resp., $M$), in particular, on the presence of resonances between them.
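Concretely, a resonance is an identity $\lambda_j=\sum_i m_i\lambda_i$ among the eigenvalues of $A$, with nonnegative integers $m_i$, $\sum_i m_i\geqslant 2$; its presence obstructs formal linearization (the resonant monomials survive in the Poincaré–Dulac normal form). A brute-force search as an illustration only (plain Python; the function name and order cutoff are assumptions of ours):

```python
from itertools import product

def resonances(eigs, max_order=4, tol=1e-9):
    """Search for additive resonances lambda_j = sum_i m_i * lambda_i
    with nonnegative integers m_i and 2 <= |m| <= max_order."""
    n = len(eigs)
    found = []
    for m in product(range(max_order + 1), repeat=n):
        if not 2 <= sum(m) <= max_order:
            continue
        s = sum(mi * li for mi, li in zip(m, eigs))
        for j, lj in enumerate(eigs):
            if abs(s - lj) < tol:
                found.append((j, m))
    return found

# lambda = (1, 2): 2 = 2*1 is a resonance, so the normal form may
# retain the resonant monomial x1^2 d/dx2 (no linearization in general);
# lambda = (1, sqrt(2)) has no low-order resonances.
assert (1, (2, 0)) in resonances([1.0, 2.0])
assert resonances([1.0, 2 ** 0.5]) == []
```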
Besides, again in contrast with the left-right classification, the three parallel theories (analytic, $C^\infty$-smooth and formal) differ very much in the case of dynamic equivalence: the divergence of the formal series conjugating an object with its normal form is a typical, unavoidable phenomenon.
↑ The vector field generating a one-parametric group (the flow of this field) is defined as the velocity, $v(x)=\left.\frac{\rd}{\rd t}\right|_{t=0}f^t(x)$
In addition to the above "general" theory, one can consider maps (and conjugacy) preserving various additional structures.
For instance, an even-dimensional neighborhood $(\RR^{2n},0)$ can be equipped with the standard symplectic structure $\omega=\sum_{i=1}^n \rd x_i\land\rd y_i$. Then with any germ of a smooth function $H$ with a critical point at the origin (the Hamiltonian) one can associate the Hamiltonian vector field $v_H$, uniquely defined by the identity $\rd H=\omega(v_H,\cdot)$ between 1-forms. The equivalence relation rather naturally requires the conjugating diffeomorphism $h$ to be canonical, i.e., to preserve the symplectic structure: $h^*\omega=\omega$. The corresponding classification theory is important for Hamiltonian dynamical systems. As with the "general" theory, the answers depend on the arithmetical properties of the eigenvalues of the linearization, with resonances (defined in a slightly different way) playing the central role.
Another important structure is that of a vector bundle. Consider vector fields on $(\CC^{n+1},0)$ which are linear in the last $n$ coordinates and are fibered over the 1-dimensional base: such a vector field can always be written (after a suitable change of coordinates) in the form $$ \dot x=x^{\mu+1},\quad \dot y=A(x)y,\qquad \mu\in\ZZ_+,\ A(x)=A_0+xA_1+x^2A_2+\cdots\text{ a holomorphic matrix-valued function}. $$ The natural equivalence relation on such vector fields is that of gauge equivalence, corresponding to the change of variables $y=H(x)w$ with a holomorphic invertible matrix function $H(\cdot)$. The corresponding classification differs substantially for $\mu=0$ (Fuchsian singularities), where formal normal forms are polynomial and convergent, and $\mu>0$ (irregular singularities), where the divergence of the formal transformations is the rule[1], see also Stokes phenomenon and [IY, Sect. 20].
In a different spirit, a possible ramification concerns "dynamical systems with multidimensional time": for such systems one is given a tuple of commuting vector fields $v_1,\dots,v_k$ with $[v_i,v_j]=0$ for all $i,j$ (resp., a tuple of commuting self-maps $f_1,\dots,f_k\in\operatorname{Diff}(\RR^n,0)$ with $f_i\circ f_j=f_j\circ f_i$ for all $i,j$), and the question is about simultaneous reduction of all fields (resp., germs) to some tuple of normal forms, see [2] and the references therein.
Smooth/holomorphic actions of groups more general than $\ZZ^k$ or $\RR^k$ are usually considered in the framework of group theory.
↑ Yu. Ilyashenko, Nonlinear Stokes phenomena, Nonlinear Stokes phenomena, 1–55, Adv. Soviet Math., 14, Amer. Math. Soc., Providence, RI, 1993, MR1206041.
↑ L. Stolovitch, Normalisation holomorphe d'algèbres de type Cartan de champs de vecteurs holomorphes singuliers, Ann. of Math. (2) 161 (2005), no. 2, 589–612, MR2153396
References and basic literature
[M] J. W. Milnor, Morse theory. Based on lecture notes by M. Spivak and R. Wells. Annals of Mathematics Studies, No. 51 Princeton University Press, Princeton, N.J. 1963, MR0163331.
[A71] V. I. Arnold, Matrices depending on parameters. Russian Math. Surveys 26 (1971), no. 2, 29--43, MR0301242
[GG] M. Golubitsky, V. Guillemin, Stable mappings and their singularities, Graduate Texts in Mathematics, Vol. 14. Springer-Verlag, New York-Heidelberg, 1973, MR0341518.
[A83] V. I. Arnold, Geometrical methods in the theory of ordinary differential equations. Grundlehren der Mathematischen Wissenschaften, 250. Springer-Verlag, New York-Berlin, 1983, MR0695786
[Ar74] V. I. Arnold, Mathematical methods of classical mechanics. Graduate Texts in Mathematics, 60. Springer-Verlag, New York, 1989. MR1345386
[AVG] V. I. Arnold, S. M. Guseĭn-Zade, A. N. Varchenko, Singularities of differentiable maps, Vol. I, The classification of critical points, caustics and wave fronts. Monographs in Mathematics, 82. Birkhäuser Boston, Inc., Boston, MA, 1985, ISBN: 0-8176-3187-9, MR0777682.
[IY] Yu. Ilyashenko, S. Yakovenko, Lectures on analytic differential equations. Graduate Studies in Mathematics, 86. American Mathematical Society, Providence, RI, 2008 MR2363178
Normal form. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Normal_form&oldid=36204
SciPost Physics Vol. 8 issue 3 (March 2020)
Anisotropic scaling of the two-dimensional Ising model II: surfaces and boundary fields
Hendrik Hobrecht, Alfred Hucht
SciPost Phys. 8, 032 (2020) · published 2 March 2020
Based on the results published recently [SciPost Phys. 7, 026 (2019)], the influence of surfaces and boundary fields is calculated for the ferromagnetic anisotropic square lattice Ising model on finite lattices as well as in the finite-size scaling limit. Starting with the open cylinder, we independently apply boundary fields on both sides which can be either homogeneous or staggered, representing different combinations of boundary conditions. We confirm several predictions from scaling theory, conformal field theory and renormalisation group theory: we explicitly show that anisotropic couplings enter the scaling functions through a generalised aspect ratio, and demonstrate that open and staggered boundary conditions are asymptotically equal in the scaling regime. Furthermore, we examine the emergence of the surface tension due to one antiperiodic boundary in the system in the presence of symmetry breaking boundary fields, again for finite systems as well as in the scaling limit. Finally, we extend our results to the antiferromagnetic Ising model.
Algebraic structure of classical integrability for complex sine-Gordon
Jean Avan, Luc Frappat, Eric Ragoucy
The algebraic structure underlying the classical $r$-matrix formulation of the complex sine-Gordon model is fully elucidated. It is characterized by two matrices $a$ and $s$, components of the $r$ matrix as $r=a-s$. They obey a modified classical reflection/Yang--Baxter set of equations, further deformed by non-abelian dynamical shift terms along the dual Lie algebra $su(2)^*$. The sign shift pattern of this deformation has the signature of the twisted boundary dynamical algebra. Issues related to the quantization of this algebraic structure and the formulation of quantum complex sine-Gordon on those lines are introduced and discussed.
Generalized Gibbs Ensemble and string-charge relations in nested Bethe Ansatz
György Z. Fehér, Balázs Pozsgay
The non-equilibrium steady states of integrable models are believed to be described by the Generalized Gibbs Ensemble (GGE), which involves all local and quasi-local conserved charges of the model. In this work we investigate integrable lattice models solvable by the nested Bethe Ansatz, with group symmetry $SU(N)$, $N\ge 3$. In these models the Bethe Ansatz involves various types of Bethe rapidities corresponding to the "nesting" procedure, describing the internal degrees of freedom for the excitations. We show that a complete set of charges for the GGE can be obtained from the known fusion hierarchy of transfer matrices. The resulting charges are quasi-local in a certain regime in rapidity space, and they completely fix the rapidity distributions of each string type from each nesting level.
Replica Bethe Ansatz solution to the Kardar-Parisi-Zhang equation on the half-line
Alexandre Krajenbrink, Pierre Le Doussal
We consider the Kardar-Parisi-Zhang (KPZ) equation for the stochastic growth of an interface of height $h(x,t)$ on the positive half line with boundary condition $\partial_x h(x,t)|_{x=0}=A$. It is equivalent to a continuum directed polymer (DP) in a random potential in half-space with a wall at $x=0$ either repulsive $A>0$, or attractive $A<0$. We provide an exact solution, using replica Bethe ansatz methods, to two problems which were recently proved to be equivalent [Parekh, arXiv:1901.09449]: the droplet initial condition for arbitrary $A \geqslant -1/2$, and the Brownian initial condition with a drift for $A=+\infty$ (infinite hard wall). We study the height at $x=0$ and obtain (i) at all time the Laplace transform of the distribution of its exponential (ii) at infinite time, its exact probability distribution function (PDF). These are expressed in two equivalent forms, either as a Fredholm Pfaffian with a matrix valued kernel, or as a Fredholm determinant with a scalar kernel. For droplet initial conditions and $A> - \frac{1}{2}$ the large time PDF is the GSE Tracy-Widom distribution. For $A= \frac{1}{2}$, the critical point at which the DP binds to the wall, we obtain the GOE Tracy-Widom distribution. In the critical region, $A+\frac{1}{2} = \epsilon t^{-1/3} \to 0$ with fixed $\epsilon = \mathcal{O}(1)$, we obtain a transition kernel continuously depending on $\epsilon$. Our work extends the results obtained previously for $A=+\infty$, $A=0$ and $A=- \frac{1}{2}$.
Fredholm determinants, full counting statistics and Loschmidt echo for domain wall profiles in one-dimensional free fermionic chains
Oleksandr Gamayun, Oleg Lychkovskiy, Jean-Sébastien Caux
We consider an integrable system of two one-dimensional fermionic chains connected by a link. The hopping constant at the link can be different from that in the bulk. Starting from an initial state in which the left chain is populated while the right is empty, we present time-dependent full counting statistics and the Loschmidt echo in terms of Fredholm determinants. Using this exact representation, we compute the above quantities as well as the current through the link, the shot noise and the entanglement entropy in the large time limit. We find that the physics is strongly affected by the value of the hopping constant at the link. If it is smaller than the hopping constant in the bulk, then a local steady state is established at the link, while in the opposite case all physical quantities studied experience persistent oscillations. In the latter case the frequency of the oscillations is determined by the energy of the bound state and, for the Loschmidt echo, by the bias of chemical potentials.
Front dynamics in the XY chain after local excitations
Viktor Eisler, Florian Maislinger
We study the time evolution of magnetization and entanglement for initial states with local excitations, created upon the ferromagnetic ground state of the XY chain. For excitations corresponding to a single or two well separated domain walls, the magnetization profile has a simple hydrodynamic limit, which has a standard interpretation in terms of quasiparticles. In contrast, for a spin-flip we obtain an interference term, which has to do with the nonlocality of the excitation in the fermionic basis. Surprisingly, for the single domain wall the hydrodynamic limit of the entropy and magnetization profiles are found to be directly related. Furthermore, the entropy profile is additive for the double domain wall, whereas in case of the spin-flip excitation one has a nontrivial behaviour.
Number-resolved imaging of $^{88}$Sr atoms in a long working distance optical tweezer
Niamh Christina Jackson, Ryan Keith Hanley, Matthew Hill, Frédéric Leroux, Charles S. Adams, Matthew Philip Austin Jones
SciPost Phys. 8, 038 (2020) · published 10 March 2020
We demonstrate number-resolved detection of individual strontium atoms in a long working distance low numerical aperture (NA = 0.26) tweezer. Using a camera based on single-photon counting technology, we determine the presence of an atom in the tweezer with a fidelity of 0.989(6) (and loss of 0.13(5)) within a 200 $\mu$s imaging time. Adding continuous narrow-line Sisyphus cooling yields similar fidelity, at the expense of much longer imaging times (30 ms). Under these conditions we determine whether the tweezer contains zero, one or two atoms, with a fidelity $>$0.8 in all cases with the high readout speed of the camera enabling real-time monitoring of the number of trapped atoms. Lastly we show that the fidelity can be further improved by using a pulsed cooling/imaging scheme that reduces the effect of camera dark noise.
Decaying quantum turbulence in a two-dimensional Bose-Einstein condensate at finite temperature
Andrew J. Groszek, Matthew J. Davis, Tapio P. Simula
We numerically model decaying quantum turbulence in two-dimensional disk-shaped Bose-Einstein condensates, and investigate the effects of finite temperature on the turbulent dynamics. We prepare initial states with a range of condensate temperatures, and imprint equal numbers of vortices and antivortices at randomly chosen positions throughout the fluid. The initial states are then subjected to unitary time-evolution within the c-field methodology. For the lowest condensate temperatures, the results of the zero temperature Gross-Pitaevskii theory are reproduced, whereby vortex evaporative heating leads to the formation of Onsager vortex clusters characterised by a negative absolute vortex temperature. At higher condensate temperatures the dissipative effects due to vortex-phonon interactions tend to drive the vortex gas towards positive vortex temperatures dominated by the presence of vortex dipoles. We associate these two behaviours with the system evolving toward an anomalous non-thermal fixed point, or a Gaussian thermal fixed point, respectively.
Quantum coherence from commensurate driving with laser pulses and decay
Goetz S. Uhrig
Non-equilibrium physics is a particularly fascinating field of current research. Generically, driven systems are gradually heated up so that quantum effects die out. In contrast, we show that a driven central spin model including controlled dissipation in a highly excited state allows us to distill quantum coherent states, indicated by a substantial reduction of entropy. The model is experimentally accessible in quantum dots or molecules with unpaired electrons. The potential of preparing and manipulating coherent states by designed driving potentials is pointed out.
Introducing iFluid: a numerical framework for solving hydrodynamical equations in integrable models
Frederik S. Møller, Jörg Schmiedmayer
We present an open-source Matlab framework, titled iFluid, for simulating the dynamics of integrable models using the theory of generalized hydrodynamics (GHD). The framework provides an intuitive interface, enabling users to define and solve problems in a few lines of code. Moreover, iFluid can be extended to encompass any integrable model, and the algorithms for solving the GHD equations can be fully customized. We demonstrate how to use iFluid by solving the dynamics of three distinct systems: (i) The quantum Newton's cradle of the Lieb-Liniger model, (ii) a gradual field release in the XXZ-chain, and (iii) a partitioning protocol in the relativistic sinh-Gordon model.
Parent Hamiltonian reconstruction of Jastrow-Gutzwiller wavefunctions
Xhek Turkeshi, Marcello Dalmonte
Variational wave functions have been a successful tool to investigate the properties of quantum spin liquids. Finding their parent Hamiltonians is of primary interest for the experimental simulation of these strongly correlated phases, and for gathering additional insights on their stability. In this work, we systematically reconstruct approximate spin-chain parent Hamiltonians for Jastrow-Gutzwiller wave functions, which share several features with quantum spin liquid wave-functions in two dimensions. Firstly, we determine the different phases encoded in the parameter space through their correlation functions and entanglement content. Secondly, we apply a recently proposed entanglement-guided method to reconstruct parent Hamiltonians to these states, which constrains the search to operators describing relativistic low-energy field theories - as expected for deconfined phases of gauge theories relevant to quantum spin liquids. The quality of the results is discussed using different quantities and comparing to exactly known parent Hamiltonians at specific points in parameter space. Our findings provide guiding principles for experimental Hamiltonian engineering of this class of states.
Multi-scale mining of kinematic distributions with wavelets
Ben G. Lillard, Tilman Plehn, Alexis Romero, Tim M. P. Tait
Typical LHC analyses search for local features in kinematic distributions. Assumptions about anomalous patterns limit them to a relatively narrow subset of possible signals. Wavelets extract information from an entire distribution and decompose it at all scales, simultaneously searching for features over a wide range of scales. We propose a systematic wavelet analysis and show how bumps, bump-dip combinations, and oscillatory patterns are extracted. Our kinematic wavelet analysis kit KWAK provides a publicly available framework to analyze and visualize general distributions.
Yang-Baxter integrable Lindblad equations
Aleksandra A. Ziolkowska, Fabian H.L. Essler
We consider Lindblad equations for one-dimensional fermionic models and quantum spin chains. By employing a (graded) super-operator formalism we identify a number of Lindblad equations that can be mapped onto non-Hermitian interacting Yang-Baxter integrable models. Employing Bethe Ansatz techniques we show that the late-time dynamics of some of these models is diffusive.
Entanglement spreading and quasiparticle picture beyond the pair structure
Alvise Bastianello, Mario Collura
The quasi-particle picture is a powerful tool to understand the entanglement spreading in many-body quantum systems after a quench. As an input, the structure of the excitations' pattern of the initial state must be provided, the common choice being pairwise-created excitations. However, several cases fall outside this simple assumption. In this work, we investigate weakly-interacting to free quenches in one dimension. This results in a far richer excitation pattern where multiplets with a larger number of particles are excited. We generalize the quasi-particle ansatz to such a wide class of initial states, providing a small-coupling expansion of the Renyi entropies. Our results are in perfect agreement with iTEBD numerical simulations.
Symmetry resolved entanglement in gapped integrable systems: a corner transfer matrix approach
Sara Murciano, Giuseppe Di Giulio, Pasquale Calabrese
We study the symmetry resolved entanglement entropies in gapped integrable lattice models. We use the corner transfer matrix to investigate two prototypical gapped systems with a U(1) symmetry: the complex harmonic chain and the XXZ spin-chain. While the former is a free bosonic system, the latter is genuinely interacting. We focus on a subsystem being half of an infinitely long chain. In both models, we obtain exact expressions for the charged moments and for the symmetry resolved entropies. While for the spin chain we found exact equipartition of entanglement (i.e. all the symmetry resolved entropies are the same), this is not the case for the harmonic system where equipartition is effectively recovered only in some limits. Exploiting the gaussianity of the harmonic chain, we also develop an exact correlation matrix approach to the symmetry resolved entanglement that allows us to test numerically our analytic results.
Superfluids as higher-form anomalies
Luca V. Delacrétaz, Diego M. Hofman, Grégoire Mathys
We recast superfluid hydrodynamics as the hydrodynamic theory of a system with an emergent anomalous higher-form symmetry. The higher-form charge counts the winding planes of the superfluid -- its constitutive relation replaces the Josephson relation of conventional superfluid hydrodynamics. This formulation puts all hydrodynamic equations on equal footing. The anomalous Ward identity can be used as an alternative starting point to prove the existence of a Goldstone boson, without reference to spontaneous symmetry breaking. This provides an alternative characterization of Landau phase transitions in terms of higher-form symmetries and their anomalies instead of how the symmetries are realized. This treatment is more general and, in particular, includes the case of BKT transitions. As an application of this formalism we construct the hydrodynamic theories of conventional (0-form) and 1-form superfluids.
Locally quasi-stationary states in noninteracting spin chains
Maurizio Fagotti
Locally quasi-stationary states (LQSS) were introduced as inhomogeneous generalisations of stationary states in integrable systems. Roughly speaking, LQSSs look like stationary states, but only locally. Despite their key role in hydrodynamic descriptions, an unambiguous definition of LQSSs was not given. By solving the dynamics in inhomogeneous noninteracting spin chains, we identify the set of LQSSs as a subspace that is invariant under time evolution, and we explicitly construct the latter in a generalised XY model. As a by-product, we exhibit an exact generalised hydrodynamic theory (including "quantum corrections"). | CommonCrawl |
4.5: Review Problems
Book: Linear Algebra (Waldron, Cherney, and Denton)
4: Vectors in Space, n-Vectors
Contributed by David Cherney, Tom Denton, & Andrew Waldron
Professor (Mathematics) at University of California, Davis
1. When he was young, Captain Conundrum mowed lawns on weekends to help pay his college tuition bills. He charged his customers according to the size of their lawns at a rate of 5 cents per square foot and meticulously kept a record of the areas of their lawns in an ordered list:
$$A=(200,300,50,50,100,100,200,500,1000,100)\, .$$
He also listed the number of times he mowed each lawn in a given year, for the year 1988 that ordered list was
$$f=(20,1,2,4,1,5,2,1,10,6)\, .$$
a) Pretend that \(A\) and \(f\) are vectors and compute \(A\cdot f\).
b) What quantity does the dot product \(A\cdot f\) measure?
c) How much did Captain Conundrum earn from mowing lawns in 1988? Write an expression for this amount in terms of the vectors \(A\) and \(f\).
d) Suppose Captain Conundrum charged different customers different rates. How could you modify the expression in part c) to compute the Captain's earnings?
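As a numerical sanity check of part a) (plain Python, our own illustration; it only verifies the arithmetic, and the interpretation asked for in parts b)-d) is still yours to supply):

```python
A = [200, 300, 50, 50, 100, 100, 200, 500, 1000, 100]  # lawn areas (sq ft)
f = [20, 1, 2, 4, 1, 5, 2, 1, 10, 6]                   # mowings in 1988

dot = sum(a * n for a, n in zip(A, f))  # total square footage mowed in 1988
earnings = 0.05 * dot                   # at 5 cents per square foot
# dot = 16700 square feet, earnings = $835
```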
2. a) Find the angle between the diagonal of the unit square in \(\mathbb{R}^{2}\) and one of the coordinate axes.
b) Find the angle between the diagonal of the unit cube in \(\mathbb{R}^{3}\) and one of the coordinate axes.
c) Find the angle between the diagonal of the unit (hyper)-cube in \(\mathbb{R}^{n}\) and one of the coordinate axes.
d) What is the limit as \(n \to \infty\) of the angle between the diagonal of the unit (hyper)-cube in \(\mathbb{R}^{n}\) and one of the coordinate axes?
3. Consider the matrix
\(M = \begin{pmatrix}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta \\
\end{pmatrix}
\) and the vector \(X = \begin{pmatrix}x\\y\end{pmatrix}\).
a) Sketch \(X\) and \(MX\) in \(\mathbb{R}^{2}\) for several values of \(X\) and \(\theta\).
b) Compute \(\frac{||MX||}{||X||}\) for arbitrary values of \(X\) and \(\theta\).
c) Explain your result for (b) and describe the action of \(M\) geometrically.
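A numerical experiment for part b) (an illustrative Python sketch of ours; it previews, but does not prove, the answer):

```python
import math

def apply_M(theta, x, y):
    """Apply the matrix M of the problem to the vector X = (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * y, -s * x + c * y)

# ||MX|| / ||X|| for X = (3, -4), ||X|| = 5, at several angles theta:
for theta in [0.0, 0.7, math.pi / 3, 2.5]:
    u, v = apply_M(theta, 3.0, -4.0)
    assert abs(math.hypot(u, v) - 5.0) < 1e-12  # the ratio is always 1
```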
4. (Lorentzian Strangeness). For this problem, consider \(\mathbb{R}^{n}\) with the Lorentzian inner product defined in example 46 of section 4.3.
a) Find a non-zero vector in two-dimensional Lorentzian space-time with zero length.
b) Find and sketch the collection of all vectors in two-dimensional Lorentzian space-time with zero length.
c) Find and sketch the collection of all vectors in three-dimensional Lorentzian space-time with zero length.
5. Create a system of equations whose solution set is a 99-dimensional hyperplane in \(\Re^{101}\).
6. Recall that a plane in \(\Re^{3}\) can be described by the equation
$$n \cdot \begin{pmatrix}x\\ y\\ z\end{pmatrix}=n\cdot p$$
where the vector \(p\) labels a given point on the plane and \(n\) is a vector normal to the plane. Let \(N\) and \(P\) be vectors in \(\Re^{101}\) and
$$X=\begin{pmatrix}x^{1}\\x^{2}\\ \vdots\\ x^{101}\end{pmatrix}.$$
What kind of geometric object does \(N\cdot X= N\cdot P\) describe?
7. Consider the vectors
$$
u=\begin{pmatrix}1\\1\\1\\ \vdots \\ 1\end{pmatrix} {\rm ~and~} v= \begin{pmatrix}1\\2\\3\\ \vdots\\ \! 101\!\end{pmatrix}
$$
in \(\Re^{101}\).
Find the projection of \(v\) onto \(u\) and the projection of \(u\) onto \(v\). (\(\textit{Hint:}\) Remember that two vectors \(u\) and \(v\) define a plane, so first work out how to project one vector onto another in a plane. The picture from Section 14.4 could help.)
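As a numerical sanity check (an illustrative Python sketch of ours, using the in-plane projection formula \(\operatorname{proj}_u v = \frac{u\cdot v}{u\cdot u}\,u\); not a substitute for the derivation the hint asks for):

```python
u = [1] * 101                 # the all-ones vector in R^101
v = list(range(1, 102))       # (1, 2, ..., 101)

u_dot_v = sum(a * b for a, b in zip(u, v))   # 1 + 2 + ... + 101 = 5151
coef = u_dot_v / sum(a * a for a in u)       # (u.v)/(u.u) = 5151/101 = 51
proj_of_v_onto_u = [coef * a for a in u]     # 51 * u
```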
8. If the solution set to the equation \(A(x)=b\) is the set of vectors whose tips lie on the paraboloid \(z=x^{2}+y^{2}\), then what can you say about the function \(A\)?
9. Find a system of equations whose solution set is
\left\{ \begin{pmatrix}1\\1\\2\\0\end{pmatrix} +c_1 \begin{pmatrix}-1\\-1\\0\\1\end{pmatrix} +c_2 \begin{pmatrix}0\\0\\-1\\-3\end{pmatrix} \middle| \,c_1,c_2\in \Re
Give a general procedure for going from a parametric description of a hyperplane to a system of equations with that hyperplane as a solution set.
10. If \(A\) is a linear operator and both \(x=v\) and \(x=cv\) (for any real number \(c\)) are solutions to \(Ax=b\), then what can you say about \(b\)?
David Cherney, Tom Denton, and Andrew Waldron (UC Davis)
Large ice loss variability at Nioghalvfjerdsfjorden Glacier, Northeast-Greenland
Christoph Mayer1 (ORCID: 0000-0002-4226-4608),
Janin Schaffer2 (ORCID: 0000-0002-1395-7851),
Tore Hattermann2,3 (ORCID: 0000-0002-5538-2267),
Dana Floricioiu4 (ORCID: 0000-0002-1647-7191),
Lukas Krieger4 (ORCID: 0000-0002-2464-3102),
Paul A. Dodd5 (ORCID: 0000-0002-4236-9071),
Torsten Kanzow2,
Carlo Licciulli1 &
Clemens Schannwell6
Nature Communications volume 9, Article number: 2768 (2018)
Nioghalvfjerdsfjorden is a major outlet glacier in Northeast-Greenland. Although earlier studies showed that the floating part near the grounding line thinned by 30% between 1999 and 2014, the temporal ice loss evolution, its relation to external forcing and the implications for the grounded ice sheet remain largely unclear. By combining observations of surface features, ice thickness and bedrock data, we find that the ice shelf mass balance has been out of equilibrium since 2001, with large variations of the thinning rates on annual/multiannual time scales. Changes in ice flux and surface ablation are too small to produce this variability. An increased ocean heat flux is the most plausible cause of the observed thinning. For sustained environmental conditions, the ice shelf will lose large parts of its area within a few decades and ice modeling shows a significant, but locally restricted thinning upstream of the grounding line in response.
Nioghalvfjerdsfjorden Glacier or 79 North Glacier has the largest ice shelf in Greenland with a length of more than 70 km and a width of about 20 km at mid-distance. Together with its neighbors Zachariæ Isstrøm and Storstrømmen, it is one of the major outlets of the North East Greenland Ice Stream (NEGIS), sharing a catchment area of almost 200,000 km² or 12% of the Greenland ice sheet area1. This region has the potential to raise global sea level by 1.1 m in the unlikely case of complete loss of this ice sheet sector1. The region also represents a major transport route for ice discharge into the Nordic Seas where it adds to the oceanic freshwater budget and large-scale circulation (for example see ref.2).
While other regions of the Greenland ice margin have already shown strong mass loss3, the mass balance of the upper parts of NEGIS was long considered to be close to equilibrium (for example see ref.4), while thinning was observed closer to the margin. It was assumed that the region was not much influenced by climate change until the beginning of the new millennium5,6. The area of 79 North Glacier has remained remarkably stable since the first observations of its calving front in 19067, with no signs of ice shelf break-up even during recent years. However, the upstream NEGIS sector has shown increasing thinning rates since 20067 and Zachariæ Isstrøm has experienced the disintegration of its frontal ice shelf and an increase in ice flux of 50% between 1976 and 2015. Currently, Zachariæ Isstrøm loses about 5 Gt year−11. Even though the impact of changes in environmental parameters is not known in detail, the increased thinning rates are likely related to increased air temperatures leading to higher melt rates and a reduction in summer sea ice concentration. This facilitates higher calving and retreat rates, associated with a positive feedback due to a retrograde bed slope and reduction in buttressing from the glacier margins. In addition, the entry of warm subsurface ocean water could intensify the mass loss by melting6.
This recent evolution of Zachariæ Isstrøm and its potential causes raises the pressing question of the future stability of its northern neighbor, the floating part of 79 North Glacier. Recently observed warming of Atlantic Waters in the Nordic Seas (e.g., in Fram Strait8) and the Arctic Ocean9, increasing Arctic surface air temperatures10 and more regular fast ice summer breakups since 200111 might affect this ice shelf in a similar way. The loss of the ice shelf might imply a reduction of the buttressing of the ice sheet, leading to enhanced ice discharge, progressive thinning and retreat of the grounding line, depending on the bedrock geometry12. The ice sheet thinning at the grounding line leads to larger surface slopes and thus to enhanced ice flow, which will gradually spread further upstream until a new balance geometry is reached.
The floating part of 79 North Glacier fills the fjord between Kronprins Christian Land in the North and Lambert Land in the South (Fig. 1). Seismic measurements have revealed a deep ocean cavity beneath the ice shelf13 that extends down to 900 m below sea level near the grounding line and rises eastwards towards the calving front, where a sill culminates in several shallow bedrock highs, which are pinning the ice shelf.
The floating part of 79 North Glacier with the data sets used in the study. 79 North Glacier in Northeast Greenland, with the locations of the cross profiles used in this analysis (dark blue: Radar and IceBridge flight lines, light blue: MGO cross profile as in Figs. 2,4 and 6). The location of the Midgardsormen experiment is shown as a red dot, while the seismic measurements from 1997 and 1998 are shown as blue dots13. The light green dots represent the CTD profiles used in this study. The green lines represent transects used for the plume model simulations. The light blue box shows the geographical extent of Fig. 4 and the upstream part of the MGO-ridge (gray winding feature). The grounding line is indicated by the red line in the lower left corner. The ice shelf front is located to the right of the image and towards the North (Djimphna Sund, at the upper right part of the image). The background image is taken from a Landsat 5 scene from 25 July 1998. The locations of the weather stations in Danmarkshavn and Station Nord are shown in the inset map as yellow dots
An ice ridge along the northern boundary of the ice shelf represents a remarkable surface feature of the glacier. It was named Midgardsormen (MGO-ridge) after the Midgard Serpent of Norse mythology, owing to its resemblance to a winding snake. The location of Midgardsormen and of the different data sets used in this study are shown in Fig. 1. The fact that the MGO-ridge is not stable in time led to the hypothesis that its migration is linked to changes in ice thickness and can be used to construct a time series of the ice shelf thickness evolution. We analyze the impact of possible forcing mechanisms to identify the main drivers of the observed thinning and use an ice dynamic model to investigate the response of the grounded ice. Oceanic energy transport is the most likely source for the strong variability of the observed ice thickness changes. We find that the floating part of the glacier very likely will disappear during the coming decades, while the effect on the adjacent grounded ice is significant, but locally restricted.
The lateral grounding line of Midgardsormen
To date, no detailed analysis of temporal ice thickness changes is available for 79 North Glacier, apart from short-period remote sensing observations (for example see ref.14). Here, we present a time series of ice thickness changes dating back to 1998, with a temporal resolution of better than three years.
The migration of the MGO-ridge is utilized to investigate thickness changes of the floating ice tongue over time with an approximately annual temporal resolution (Table 1). The detailed geophysical measurements across Midgardsormen in 1998 reveal that it represents a special type of grounding line, with the upstream floating ice (according to the grounding line delineation of ref.15) re-grounding at a shallow angle with respect to ice flow on a lateral bedrock shoulder in the fjord (Fig. 2b). This results in a peculiar pressure ridge at the surface, which we appropriately name Midgardsormen (Fig. 2a). Results from the seismic measurements in Fig. 3a show that the bedrock rises from the central trough just south of Midgardsormen and then forms a gently sloping, shallow plain. A transition from floating to grounded ice north of Midgardsormen (to the left of 2380 m on the x-axis of Fig. 3) is confirmed by the dampening of the tidal tilt across the ice ridge (Supplementary Table 1). The amplitude of the tidal signal, recorded at the tiltmeter site of NF1 located at 383 m south of Midgardsormen on the floating ice shelf (Fig. 2b), is reduced by a factor of 10 at the center of the ridge and by a factor of 30 at a distance of 185 m on the northern flank. The lateral strain, associated with the strong transversal ice velocity gradient is responsible for the formation of the MGO-pressure ridge that delineates the grounding line position. The exact width of the subglacial bedrock shoulder between the location of the Midgardsormen experiment and the seismic cross profile about 37 km further downstream is unclear, but the continuous surface expression of the ice ridge on the Landsat images (Fig. 4) suggests an extent of several kilometers to the East. 
The relatively smooth ice surface elevation of the freely floating central part of the glacier south of the MGO-ridge suggests that the ice shelf is in hydrostatic equilibrium, such that the depth of the ice draft below sea level is a direct measure of the local ice thickness. In the following, we utilize the fact that small changes in ice thickness lead to large movements of the grounding line over the gently sloping bedrock, to reconstruct the history of ice shelf thickness.
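The hydrostatic-equilibrium argument above (ice thickness recovered from the surface freeboard) can be sketched numerically. The density values below are nominal textbook numbers, not the calibration used in the study, and the 36 m freeboard is an illustrative input chosen to land near the reported mean shelf thickness:

```python
# Hydrostatic (buoyancy) ice thickness from surface elevation above sea level.
# Nominal densities; the study's calibrated values may differ.
RHO_ICE = 917.0   # kg m^-3
RHO_SEA = 1027.0  # kg m^-3

def thickness_from_freeboard(h_surface_m):
    """Ice thickness H such that freeboard h = H * (1 - rho_ice/rho_sea)."""
    return h_surface_m / (1.0 - RHO_ICE / RHO_SEA)

# A freely floating column with ~36 m of freeboard is roughly 336 m thick,
# comparable to the ~338 m mean shelf thickness reported for 1998.
print(round(thickness_from_freeboard(36.0), 1))  # 336.1
```

This is the same conversion later applied to the TanDEM-X surface elevations to obtain buoyancy-derived thicknesses.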
Table 1 Midgardsormen grounding line migration
Migardsormen ice ridge and the local measurements. a Ice ridge of Midgardsormen on the northern part of the ice shelf, close to the location of the ice ridge measurements in the 1990s. The view is upstream towards the grounding line of 79 North Glacier (photo: C. Mayer, 1998). b measurements of the Midgardsormen experiment. The blue line represents the ice ridge, the red line the seismic profile. Ice velocities are displayed as black arrows and the positions of tilt meters (T) are indicated. The grounded part is indicated by gray shading
Glacier geometry along the cross section of GPR and seismic transects. a Cross section of Midgardsormen ridge, as revealed by the seismic measurements in 1998 (location: red dot in Fig. 1). b Glacier geometry along the first 3.2 km of the light blue profile in Fig. 4, according to the airborne radar measurements in 1997. The x-axis coordinates are identical in a and b. The y-axis in a represents the true scale, while the vertical axis is exaggerated three-fold in b
Migration of Midgardsormen from 1994 until 2014. Position of Midgardsormen in 1994 (red dotted line) and 2014 (dark purple dashed line). The red dot represents the location of the seismic experiment at Midgardsormen in 1998, while the blue line indicates the flight line of the airborne radar in 1997 and the elevation profile of the Icebridge ATM in 2012 (background image: Landsat 8 scene from 12 Jul 2014). The light blue section of the flight line represents the part displayed in Fig. 3b
Grounding line migration and related thickness changes
Co-registered Landsat scenes reveal that the position of Midgardsormen moved about 2.1 km towards NW from 1994 to 2014 (Fig. 4). The displacements between the acquisition times of all Landsat scenes are given in Table 1, showing a successive northward (up-slope) displacement of the MGO-ridge, which is consistent with a continuous thinning of the ice shelf. The basal topography of the subglacial fjord shoulder north of the Midgardsormen position in 1998 is known from the seismic measurements near the ridge and the airborne radar ice thickness measurements towards the northern ice margin (Fig. 3). Based on this information, we estimate the observed northward migration of Midgardsormen between 1998 and 2014 to correspond to an ice thickness reduction of 85.9 m, or a mean ice thickness loss of 5.3 m year−1 during this period.
The Landsat images show that Midgardsormen had moved another 682 m between 1994 and 1998. This indicates an earlier onset of the thinning, although no measurements are available for the fjord bottom at the location of the 1994 grounding line, which inhibits the quantification of ice thickness change in this early period.
In order to relate these observations to the larger region, we compare ice shelf thicknesses from ground-penetrating airborne radar (cross profile 365000–392000, Fig. 1, western profile) and buoyancy-derived ice thicknesses from TanDEM-X surface elevations along the same profile, south of the 1998 Midgardsormen position. Details of the TanDEM-X scenes used (Supplementary Table 2) and the resulting thickness changes (Supplementary Table 3) are provided in the Supplementary Information. Uncertainties connected to the TanDEM-X elevation calculations are presented in the Methods section. The results show a mean ice thickness reduction of 89.5 m between 1997 and 2014. Moreover, the ice shelf lost another 8.6 m according to surface elevation changes from TanDEM-X data analyzed over a larger floating area between December 2014 and September 2016. The comparison with surface elevation data from Operation IceBridge ATM results in similar differences (Supplementary Table 4). The mean annual ice thickness loss was about 5.16 m year−1 for the entire period. Additionally, surface elevation changes over the entire floating part of the ice shelf have been calculated from overlapping TanDEM-X acquisitions. The resulting ice thickness changes in the time periods 2011–2012, 2012–2014, and 2014–2016 are shown in Fig. 5 (numerical values in Supplementary Table 3).
Thickness changes of 79 North Glacier based on the migration of Midgardsormen. Temporal evolution of annual mean thickness changes as inferred from the displacement of Midgardsormen between 1998 and 2015. Red dots: date of grounding line detection, blue dashed line: mean annual thickness change between 1998 and 2016. The green boxes represent ice thickness changes from TanDEM-X surface elevation differences derived for recent periods over a larger part of the floating ice shelf. The vertical lines represent the error of the thickness change
The total ice thickness change, based on the cross profile comparison, is almost identical to the magnitude derived from the total lateral grounding line displacement. In the following, we thus assume that the local changes in ice thickness of higher temporal resolution, inferred from the grounding line migration, are representative of the mean regional thickness changes along the entire floating part of the cross profiles. Across the entire floating part of the cross profile an ice thickness loss of 26% was detected from 1998 to 2014 (mean ice thickness in 1998: 338.2 ± 20.1 m), while the corresponding thickness change at the grounding line is 30%.
Based on the analysis of the grounding line migration, the thickness change reveals the following pattern: In the late 1990s, the floating ice tongue seems to be close to an equilibrium with a minor change in ice thickness (in spite of the observed considerable grounding line migration between 1998 and 2001). In the first years of the new millennium, the situation changes considerably and an ice thickness loss of more than 12 m is derived for 2001/2002. After this period, the thinning gradually reduces to rather low values of 1.5 m year−1 during 2006–2009. Between 2009 and 2012 the annual ice thinning intensifies again, reaching 9.0 m year−1 for the entire period (more than 12 m year−1 in 2009/2010), before it reduces again slightly to 6.5 m year−1 between 2012 and 2014. The analysis of the recent TanDEM-X elevation models provides additional information for the period 2014 until 2016, where the ice thickness is reduced by another 5.2 m year−1 in the region of Midgardsormen.
Potential reasons for the ice thickness variations
There are several potential reasons for this considerable volume loss of the ice shelf. A change in the horizontal velocity structure (e.g., a slow down near the grounding line and/or an acceleration towards the calving front) could lead to dynamic thinning of the floating ice. Also, variations in the surface mass balance could induce higher thinning rates and thus a reduction of the ice thickness. Increasing melt rates, however, might also be induced by changes in the subglacial oceanic conditions (such as ocean warming).
Ice velocities showed almost no change during the entire period since 1998. The mean ice velocity along the ice shelf cross profile at Midgardsormen, determined with the IMCORR correlation algorithm16, amounts to 859 ± 10 m year−1 for the period 1998–2001 and to 843 ± 15 m year−1 for 2009 until 2010. Also, the along-flow velocity gradients did not change significantly over this period, indicating that the calculated ice thickness change is not related to dynamic thinning of the glacier. The strain rate in the center of the ice shelf, south of Midgardsormen and between the two ice shelf cross profiles in Fig. 1 (blue lines), shows only a very small increase from −0.0357 ± 0.0263 year−1 in 1998 to −0.0374 ± 0.0278 year−1 in 2009. This relates to an increase in dynamic thickening of 0.56 ± 0.43 m year−1 (from 11.78 ± 8.68 m year−1 to 12.34 ± 9.18 m year−1, respectively). Therefore, a change in ice dynamics cannot be the main reason for the observed ice thickness reduction of 91.1 m between 1998 and 2016.
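The dynamic-thickening numbers quoted above follow from the standard relation between thickness change and horizontal strain rate, \(\dot H_{\mathrm{dyn}} = -H\,\dot\varepsilon\). A sketch, assuming an illustrative column thickness of 330 m (an assumption of this sketch, chosen to reproduce the quoted values; the paper does not restate the exact thickness used):

```python
# Dynamic thickening rate from horizontal strain: dH/dt = -H * strain_rate.
# Negative (compressive) strain rates therefore thicken the column.
def dynamic_thickening(H_m, strain_rate_per_yr):
    return -H_m * strain_rate_per_yr

H = 330.0  # m, illustrative column thickness (assumption)
h1998 = dynamic_thickening(H, -0.0357)  # ~11.78 m/yr thickening
h2009 = dynamic_thickening(H, -0.0374)  # ~12.34 m/yr thickening
print(round(h1998, 2), round(h2009, 2), round(h2009 - h1998, 2))
```

The ~0.56 m year−1 difference between the two epochs is tiny next to the observed thinning, which is the article's point.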
To estimate the role of variations of the surface mass balance for the temporal variability of the ice thickness, we evaluated potential surface melt magnitudes based on temperature records from the closest weather stations Danmarkshavn and Station Nord. However, only Danmarkshavn weather station provides a continuous temperature record from 1958 until now17. The recorded summer air temperatures indicate a tendency to higher surface melt rates in the recent years (Supplementary Fig. 1). The surface melt rate (SMR) is based on the positive degree day sums (PDD) per year and computed by
$${\mathrm{SMR}} = k_{\mathrm{ice}}\left({\mathrm{PDD}}-{\mathrm{PDD}}_{0.5\,\mathrm{m\,snow}}\right) + k_{\mathrm{snow}}\,{\mathrm{PDD}}_{0.5\,\mathrm{m\,snow}}.$$
The degree day factor for melting glacier ice (kice) is taken to be kice = 9.6 mm K−1, based on measurements on Storstrømmen Glacier (Northeast Greenland)18. The degree day factor for melting snow (ksnow) is taken to be 40% of kice, in accordance with ref.19. We assume an average snow cover of 0.5 m thickness that needs to be melted before glacier ice melts. Based on these assumptions, we find that the maximum year-to-year variation of surface ice melt is only 1.4 m year−1 (Supplementary Fig. 1). Even though there is a tendency towards higher surface melt rates, especially after 2000, the variations in surface melt rates are generally too small to explain the observed ice thickness changes derived from our analysis above. Also, the high surface melt rate in 2008 is not reflected in the ice thickness evolution derived from the Midgardsormen migration.
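A minimal sketch of this positive-degree-day scheme. Treating the 0.5 m snow cover as 500 mm of melt (i.e., ignoring the snow/ice density difference) is a simplification of this sketch, not a statement from the paper:

```python
# Positive-degree-day (PDD) surface melt, following the SMR formula above.
K_ICE = 9.6           # mm K^-1, degree-day factor for ice (ref. 18)
K_SNOW = 0.4 * K_ICE  # degree-day factor for snow, 40% of K_ICE (ref. 19)
SNOW_MM = 500.0       # 0.5 m snow cover, treated as 500 mm melt (assumption)

def surface_melt_mm(daily_mean_temps_C):
    """Annual surface melt (mm) from a series of daily mean air temperatures."""
    pdd = sum(t for t in daily_mean_temps_C if t > 0.0)  # annual PDD sum
    pdd_snow = min(pdd, SNOW_MM / K_SNOW)  # PDD spent melting the snow cover
    return K_ICE * (pdd - pdd_snow) + K_SNOW * pdd_snow

# A warm summer with 250 positive degree days: the snow cover consumes
# ~130 PDD, the remaining ~120 PDD melt glacier ice.
print(round(surface_melt_mm([5.0] * 50), 1))  # 1650.0
```

With realistic Danmarkshavn temperature records, year-to-year differences of this quantity stay near the ~1.4 m year−1 bound quoted above.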
Next, we consider the influence of changes in oceanic forcing on basal melting of the ice shelf. It has been shown20 that the cavity is filled by a lower layer of Atlantic Water with maximum temperatures around +1 °C and an upper layer of colder and fresher Polar Water on top. Since the depth of the temperature/salinity gradient that separates these two layers coincides with the depth of the MGO-grounding line, i.e., ranging between 170 and 250 m (not shown), a thinning of the ice shelf would lead to a decrease in ocean temperature at the ice-ocean interface, if ocean properties remained unchanged. In contrast, the succession of CTD profiles shows an overall warming and thickening of the Atlantic Water layer in the cavity, such that temperatures at the respective depth of the MGO-grounding line increased by 0.2 °C between 1998 and 2014, despite its migration to shallower depth (Fig. 6). At 175 m depth, i.e., the depth at the grounding line position in 2014, temperatures increase from 0 to 0.5 °C. The evolution in the ice shelf cavity is consistent with hydrographic observations that show coherent warming of the Atlantic water along Norske Trough - the main pathway across the continental shelf of Northeast Greenland from the shelf break in Fram Strait towards 79 North Glacier21.
Ice shelf geometry and water temperatures at Midgardsormen. Changes in ice shelf geometry in the Midgardsormen region between 1998 and 2014 and the location of the grounding line for the years with suitable satellite imagery. The water temperatures for the years of available CTD measurements are shown for the respective depth levels of the grounding line and a water depth of 175 m (grounding line depth in 2014)
The effects of the observed ocean warming on the ice shelf basal mass loss are assessed with a simple but well-established ice-shelf plume model22. The area-averaged, ensemble-mean basal melt rates from the model yield 8.7 ± 1.1 m year−1 for 1998 and 12.2 ± 1.6 m year−1 for 2014, which corresponds to a 40% increase in basal melt during that period. To translate the basal melt rate distribution along the plume path into a cumulative thinning of the ice shelf along a flowline, we calculate the path integral of basal mass loss for an ice column that is advected from the grounding line about 20 km downstream toward the approximate point where the glacier thinning was observed (i.e., at Midgardsormen (Fig. 1)). For that purpose, spatially varying ice flow velocities from the 2000/2001 MEaSUREs Greenland Ice Velocity Map5,23 were interpolated onto each of the five ice base profiles used for the plume model (Supplementary Fig. 2), and integrations for each profile were repeated with eleven different starting points from the grounding line and downstream, shifted in 1-km steps. The advective time scale towards the Midgardsormen cross-flow profiles is approximately 20 years. Assuming that the effects of strain thinning and surface mass balance on the ice thickness evolution remain unchanged, the difference in cumulative thinning using either 1998 or 2014 melt rates gives the melt-induced ice mass loss of the glacier for an instantaneous and sustained ocean warming. The results yield a total ensemble-mean ocean-induced ice thickness loss of 61 ± 20 m.
This estimate is comparable to the observed thinning of 86 ± 6.4 m at Midgardsormen between 1998 and 2014, showing that variations in oceanographic conditions are generally capable of inducing the observed variability in thinning rates. More extreme ocean temperatures than observed in 2014, together with additional contributions from changes in surface melt and dynamic thinning, may explain why the melt model alone yields a low estimate of the thickness loss.
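The path-integral bookkeeping described above can be sketched as follows. The flowline, velocity, and melt-rate profiles below are synthetic placeholders standing in for the paper's plume-model output (constant rates instead of spatially varying ones), so the number it produces is illustrative only:

```python
import numpy as np

# Cumulative ocean-induced thinning along a flowline: integrate the basal
# melt rate experienced by an advected ice column, with dt = dx / u(x).
def cumulative_thinning(x_m, melt_m_per_yr, speed_m_per_yr):
    dt = np.diff(x_m) / speed_m_per_yr[:-1]  # years spent in each segment
    return float(np.sum(melt_m_per_yr[:-1] * dt))

x = np.linspace(0.0, 20e3, 201)           # 20 km from the grounding line
u = np.full_like(x, 900.0)                # ~900 m/yr ice speed (placeholder)
melt_1998 = np.full_like(x, 8.7)          # area-averaged 1998 melt rate
melt_2014 = np.full_like(x, 12.2)         # area-averaged 2014 melt rate

extra = cumulative_thinning(x, melt_2014, u) - cumulative_thinning(x, melt_1998, u)
print(round(extra, 1))  # ~78 m extra thinning over ~22 years of advection
```

With spatially varying melt and velocity profiles, as in the paper, the same integral yields the quoted 61 ± 20 m.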
Consequences of the ice shelf thinning
A numerical model of the ice flow dynamics was used to assess the buttressing of the floating tongue and the consequences of its further thinning for the grounded ice. For recent conditions the ice shelf is strongly buttressed, especially within the parallel-sided main fjord. Only the lateral regions of the frontal part, where the ice shelf expands in the widening fjord, show a less pronounced buttressing (Supplementary Fig. 3). This indicates that the glacier would remain in a stable condition even if the frontal part were removed.
Time-dependent experiments, starting from the modern ice geometry (Supplementary Fig. 4), show that a change from equilibrium conditions to a strongly negative mass balance (as indicated by our results on thinning rates) will have a strong impact on the floating ice, as well as on the adjacent ice sheet (Supplementary Figs. 5, 6). The results for a 100-year forward scenario with a 1.5-times mass balance forcing represent an estimate of the ice shelf/ice sheet evolution for the coming decades. The mean thickness of the ice shelf reduces by about 45% (from 190 m to 75 m) during the modeled 100 years (Supplementary Fig. 5a). However, the proportion of very thin ice (<10 m) increases from almost 0% to almost 70% (Supplementary Fig. 5b), which indicates instability for the largest part of the ice shelf. A removal of the entire ice shelf in the fjord would lead to a strong thinning of the upstream ice sheet, at least along the 40 km long flowline simulated in our experiments (points 1–7 in Supplementary Fig. 4). At the grounding line the thinning is about 200 m, which leads to a migration of the grounding line by approximately 10 km. Here, the ice thickness reduction reaches 80–100 m after about 40 years, while the flux increases by about 30–40% (not shown here). The thinning is still about 30–50 m between 15 and 20 km upstream of the grounding line. Eventually, the ice sheet stabilizes with the ice thickness lowered by 100–120 m at the end of the simulation.
Based on our observations and calculations, 79 North Glacier lost almost one third of its thickness in the region of the 1998 Midgardsormen experiment between 1998 and 2016. Because the ice shelf is freely floating in the fjord, it can be assumed that this relative thinning is representative of a large part of the ice shelf. Otherwise, the lateral grounding line would not have moved up to shallower bedrock. The results are consistent with previous investigations concerning the bulk mass loss (for example, ref.1 found a 30% total ice thickness loss downstream of the main grounding line between 1999 and 2014). We can confirm a similar magnitude of ice loss for the central part of the ice shelf and for a similar period. However, our study shows for the first time the temporal pattern in mass wastage for a period of 18 years. The uncertainties in surface elevation derived from remote sensing images and the lack of repeat elevation information make it impossible to infer the temporal evolution from existing remote sensing information before 2010. However, the combination of bedrock topography and lateral grounding line migration provides the necessary input for deriving such a time series back to 1998. We have shown that the ice shelf experienced a change from a state close to equilibrium to a state of successive thinning, where the interannual variability is large. For equilibrium conditions, the ice thickness remains constant because ice dynamic thickening balances the total melt rates of about 12 m year−1 in the region of Midgardsormen, while the thinning doubles during phases of maximum thickness loss in 2002 and 2010.
According to our analysis, there are two periods with very strong mass loss from 2001 until 2005 and from 2009 until 2010. After 2010, the mass loss remains high, but with a decreasing tendency until 2016. Even in the period 2014–2016, the mass loss is almost as high as the mean value of −5.3 m year−1 over the entire period 1998–2016. There is no information about the bedrock topography further to the South of Midgardsormen, which would allow the mass budget estimate for the ice ridge migration from 1994 until 1998. Some degree of thickness loss must have happened during this period to facilitate the observed migration of 682 m. This also indicates that the fjord shoulder extends at least this distance further towards the fjord center.
We investigated the potential causes for the observed ice loss, finding that neither a change in ice dynamics nor a more negative surface mass balance is likely to explain the persistent thinning of the glacier. Instead, we demonstrated that observed variations in ocean temperature at the ice base would induce sufficient additional melting to cause the estimated mass loss of the ice shelf, indicating that the observed thinning relates to changing ocean conditions in front of 79 North Glacier. Warming in the subpolar North Atlantic since the mid-1990s24,25,26, a thickening of the AW layer in the Irminger Sea27 and a warming of AW in the Arctic Ocean in the 2000s9 have been observed. Warm anomalies in Fram Strait in 1999–2000 and 2005–20078, and a shoaling of the AW layer in the eastern Eurasian Basin28 suggest that these large-scale perturbations may also reach the Greenland coast. Recent studies show a consistent warming and thickening of the AW in eastern Fram Strait and on the North East Greenland continental shelf29. Although observations inside the ice shelf cavity are too sparse to scrutinize this trend, the existing hydrographic profiles suggest a successive warming and thickening of the AW layer that is consistent with the large-scale evolution.
However, the reasons for the large interannual fluctuations of the thinning rates revealed by this study still need to be found. AW anomalies in Fram Strait take about 1.5 years and longer to reach 79 North Glacier21, while fjord temperatures may vary greatly on shorter time scales. Also, the transient adjustment of the cavity circulation further modulates the response of the glacier to ocean forcing30. Following the method of ref.31 and using the hydrographic profiles to constrain the water mass transformation inside the ice shelf cavity (Supplementary Fig. 2b), it can be shown that the observed 40% increase in basal melting is associated with a 30% stronger cavity overturning circulation. Herein, the temperature difference between ingoing and outgoing waters (using the same definitions as ref.29) changes little between the different years, but the increased meltwater input at the ice base drives a more vigorous sub-ice shelf circulation that delivers the additional heat flux of 0.7 ± 1.8 × 1011 W into the cavity. This results in a reduction of the cavity exchange time scale from about 120 ± 26 days to about 90 ± 35 days, which further increases the sensitivity of the ice shelf to ocean changes30. Thus, while our analysis suggests that the ocean is likely the main driver of the observed changes at 79 North Glacier, the regional dynamics that control the heat transport into the ice shelf cavity and other contributors, such as subglacial discharge induced by surface melt or geothermal heat flux, will need further attention to fully understand the observed thickness evolution.
Potential consequences for the future ice shelf stability: Although 79 North Glacier has a more stable grounding-line situation than Zachariæ Isstrøm (the bedrock rises inland), the loss of the ice shelf might contribute to destabilizing the entire marine-based ice-sheet sector. At the moment the ice shelf is well buttressed in the fjord, and even the loss of its outer part would probably not change this. Without the buttressing effect of the floating ice tongue in the fjord, however, our simplified model approach demonstrates that the ice thickness decreases strongly and the grounding line retreats by about 10 km. This is comparable to the results of a recent study (ref. 32) and poses the question of whether the disintegration of the ice shelf, and its related consequences for the grounded ice, is likely to happen in the near future. The ice thickness was reduced by about 30% between 1998 and 2014. Compared with balanced conditions, which imply a mass balance of −12 m year−1 in the region of Midgardsormen, the mean ice loss during the observation period corresponds to an almost 1.5 times higher mass balance magnitude. The numerical simulations demonstrate that the thinning will lead to large areas of very thin ice, which are most likely unstable, and that large parts of the ice shelf will disappear during the coming decades. Given that the environmental conditions have already enabled ice-thickness reductions of up to 13 m within one year, this process could be considerably faster for enhanced oceanic energy fluxes into the ice-shelf cavity. Even though the consequences are serious for the neighboring part of the ice sheet, where the ice thins by about 200 m after the loss of the ice shelf, it seems that the increased fluxes will not reach far into the ice sheet during the next century, resulting in thickness losses on the order of 30 m about 20 km upstream. It needs to be considered, however, that our simple model setup is not appropriate for simulating the long-term feedback mechanisms.
A more detailed investigation is therefore required to assess the long-term stability of the ice sheet in this sector of Greenland.
In-situ data preparation
For our analysis, we combine glaciological in situ observations, satellite data, sub-ice-shelf oceanographic measurements and climatological reanalysis results. The core dataset was collected during joint German–Danish campaigns in 1997 and 1998 within the framework of the project Climate Change and Sea Level (ENV4-CT095-0124), providing values for ice thickness and water depth below the ice shelf at the locations indicated in Fig. 1, based on single-shot, 24-channel seismic records (ref. 13). In addition to the regional mapping of the ice-shelf geometry, a detailed seismic survey in 1998 across Midgardsormen (Fig. 2b), close to its western origin (center location: 79° 29.94′ N, 22° 17.35′ W), provides the ice thickness and the underlying bedrock elevation (Fig. 3a). In addition, ice velocities were determined by repeated GPS measurements, and a network of tilt meters recorded the spatial pattern of the tidal movement of the ice.
These ground-based measurements are complemented by airborne ice thickness measurements that were carried out in 1997, using a low frequency ground penetrating radar system and covering a large part of the ice shelf and the adjacent grounded ice sheet (source: Microwave and Remote Sensing, DTU Space, the Technical University of Denmark). The lines of this data set, which are used in this study, are shown as dark blue lines in Fig. 1.
Grounding line migration by tracking Midgardsormen
The temporal evolution of the MGO-ridge was tracked on scenes selected from the Landsat archive for the period 1998 until 2015, with roughly annual separation and an additional scene in 1994 (Table 1). For some years, no suitable scene could be identified because of cloud cover at the acquisition times (missing years in Table 1). Displacements of the MGO-ridge were measured manually on the co-registered images with a pixel size of 30 m. The combined error of scene co-registration and of identifying the Midgardsormen position is less than two pixels and thus better than 60 m. In addition, annual ice velocities of the floating ice were determined from surface-feature displacements between the Landsat scenes using the IMCORR correlation algorithm (ref. 16).
Errors involved in the ice shelf thickness change
We estimated the reference surface elevation of the airborne measurements from the combination of aircraft GPS positions and the travel times of the electromagnetic wave through air (using the first reflection from the ice surface). The accuracy of the ice-thickness estimate depends on the center frequency of the radar system and the travel speed of the radar waves in the ice. Assuming a wave velocity of 168 m μs−1 and a system frequency of 50 MHz, the possible vertical resolution (one quarter of the wavelength) of the radar system is about 0.84 m. The final accuracy, however, depends on the difference between the estimated and the actual wave travel speed in the ice column. Because there is no firn layer on the ice shelf, the radar wave velocity should be about v = 168 ± 3.4 m μs−1 (a 2% error) and thus very close to the theoretical value (ref. 33). A realistic error in determining the two-way travel time to the ice/underground reflector is half the wavelength, that is, about 1.7 m or \(\varepsilon _{\mathrm{\tau }} =\) 0.0101 μs in travel time. The mean two-way travel time in the MGO region was τ = 2.4 μs. The resulting error in the derived ice thickness is therefore
$$\Delta \varepsilon _{{\mathrm{rh}}} = \frac{1}{2}\sqrt {\tau ^2\varepsilon _{\mathrm{v}}^2 + v^2\varepsilon _\tau ^2} = \frac{1}{2}\sqrt {2.4^2 \cdot 3.4^2 + 168^2 \cdot 0.0101^2} = 4.2\,{\mathrm{m}}$$
following ref. 33.
The GPS positioning error during the airborne survey was about 40 m, which relates to a relative change in ice thickness of about 3.5 m for a mean bed slope of 5°. The resulting ice thickness error from the radar measurements is therefore 5.5 m.
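The error propagation above can be reproduced directly with the numbers quoted in the text; a minimal sketch:

```python
import math

v = 168.0              # m/us, radar wave speed in ice
wavelength = v / 50.0  # m, for the 50 MHz system; quarter wavelength ~ 0.84 m
eps_v = 3.4            # m/us, 2% velocity uncertainty
eps_tau = 0.0101       # us, travel-time error (half a wavelength)
tau = 2.4              # us, mean two-way travel time in the MGO region

# radar-derived thickness error (formula above)
err_radar = 0.5 * math.sqrt(tau**2 * eps_v**2 + v**2 * eps_tau**2)  # ~4.2 m
# GPS positioning error (40 m) projected onto a 5 deg bed slope
err_gps = 40.0 * math.tan(math.radians(5.0))                        # ~3.5 m
# combined ice-thickness error from the radar measurements
err_total = math.hypot(err_radar, err_gps)                          # ~5.5 m
```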
We infer the ice-thickness change from the floatation condition and thus from the bedrock elevation at the identified grounding-line position. In order to determine the accuracy of the bedrock elevation, the surface elevation error of the radar data also needs to be included. The surface elevation at the grounding line is derived from the ice thickness and the floatation criterion south of the MGO ridge. The best fit for the free-floating condition at the grounding line results in a residual mean square of the surface elevation of 1.45 m across the entire ice-shelf profile. The precision of the surface elevation correction from the radar data to the digital terrain model on the ice-free grounded part of the profile, north of the MGO ridge, is on the order of 2 m. Therefore, the error of the surface elevation along the profile from the MGO ridge to the shoreline is within 2 m.
The error of the bedrock elevation thus results in \(\Delta \varepsilon _{\mathrm{b}} = \sqrt {5.5^2 + 2^2} = 5.8\,{\mathrm{m}}.\)
The reconstruction of the ice thickness from the bedrock elevation, the MGO position and the floatation condition is thus affected by a total error of 6.4 m.
Surface elevation data, errors, and inferred ice thickness
To assess temporal changes in ice thickness, the airborne measurements from 1997 are compared with more recent surface elevation measurements from the NASA Airborne Topographic Mapper (ATM; flight ILATM2_20120514_135706; ref. 34), smoothed to mean values according to the airborne radar sampling size, that were collected in the framework of Operation IceBridge in 2012 and 2014 across 79 North Glacier along an almost identical flight line. The accuracy of the ATM elevation model is given as 2 m. We reduce the absolute error of the ATM data by calibrating them over stable ground against the surface reflection of the radar data from 1997.
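The calibration over stable ground amounts to removing a constant vertical offset; a minimal illustrative sketch (the function name and synthetic data below are not the actual processing chain):

```python
import numpy as np

def coregister_vertical(dem, ref, stable):
    """Remove the median height offset between two DEMs over stable (ice-free) ground."""
    offset = np.median(dem[stable] - ref[stable])
    return dem - offset, offset

# synthetic check: a DEM biased by +1.8 m recovers the reference after calibration
ref = np.linspace(0.0, 50.0, 100)          # reference heights (m)
dem = ref + 1.8                            # biased DEM
stable = np.zeros(100, dtype=bool)
stable[:30] = True                         # first 30 samples are ice-free ground
adj, off = coregister_vertical(dem, ref, stable)
```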
In addition, TanDEM-X bistatic data acquired on 8 January 2011, 14 November 2012, 8 December 2014, and 28 September 2016 were used for spatially distributed surface elevation information. This data set is characterized by effective baselines ranging from 182 to 75 m, with a corresponding height of ambiguity of 38–113 m (details: Supplementary Table 2). The InSAR digital elevation models (DEMs) cover about 30 × 50 km2 and were derived using the Integrated TanDEM-X Processor (ITP), the operational interferometric processor of the mission (ref. 35). The absolute height error of the DEMs computed with ITP takes interferometric coherence and geometrical considerations into account (refs 36,37). As explained in detail in ref. 37, the TanDEM-X global DEM and all intermediate raw InSAR DEMs are affected by an absolute horizontal error, an absolute height error and a relative height error that describes local height variations. In the present study, which relies on TanDEM-X–TanDEM-X raw DEM differencing, we only quantify the absolute height error of each scene separately. The relative height error is estimated as a random error together with the final elevation-difference measurements. Moreover, the absolute horizontal error is negligible because of the excellent geolocation accuracy of the ITP processor (ref. 37).
Hence, the absolute height error is estimated over ice-free terrain from offsets to the TanDEM-X global DEM. It is different for each raw DEM and during the DEM differencing the respective absolute height errors add up independently to \({\mathrm{SE}}_{\Delta {\mathrm{z}}}\). Together with the statistical error of the elevation difference measurement over the floating ice tongue \({\mathrm{SE}}_{\Delta {\mathrm{h}}}\) the overall uncertainty \(\varepsilon _{\Delta h}\)is calculated as:
$$\varepsilon _{\Delta {\mathrm{h}}} = \sqrt {{\mathrm{SE}}_{\Delta {\mathrm{z}}}^2 + {\mathrm{SE}}_{\Delta {\mathrm{h}}}^2}$$
The uncertainties are reported in Supplementary Table 1. Based on the periods between the acquisitions, ice-thickness change rates and their respective errors are derived from buoyancy calculations. Concerning additional errors from signal penetration, the backscattering coefficient σ0 was analyzed over the floating ice tongue (Supplementary Table 2). We find values ranging from approximately −6 to −12 dB and therefore assume dominant surface scattering, owing to the crevassed and rough surface.
For the DEMs used in the present study, the resulting height error over the floating part of the glacier ranges from 0.65 to 1.35 m. The spatial resolution is approximately 12 m. All TanDEM-X raw DEMs are vertically co-registered to the TanDEM-X global DEM over ice-free areas. The error of TanDEM-X–TanDEM-X surface elevation differences over the floating part of the ice tongue is therefore estimated to be better than 0.2 m (Supplementary Table 3), which results in an error of the ice-thickness estimate of less than ±2 m.
Finally, we compare measured ice thicknesses from the airborne ground-penetrating radar acquisitions in 1997 with surface elevations derived from TanDEM-X elevation models and Operation IceBridge ATM profiles. The surface elevation was converted to ice thickness across the floating ice shelf using an ice density of 900 kg m−3 and an ocean water density of 1028 kg m−3.
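The conversion follows from hydrostatic equilibrium of floating ice; a minimal sketch using the stated densities:

```python
def thickness_from_freeboard(h, rho_i=900.0, rho_w=1028.0):
    """Hydrostatic equilibrium: rho_i * H = rho_w * (H - h), with freeboard h."""
    return h * rho_w / (rho_w - rho_i)

factor = 1028.0 / (1028.0 - 900.0)   # ~8: thickness is ~8x the freeboard
H = thickness_from_freeboard(10.0)   # 10 m freeboard -> ~80 m of ice
# the same factor scales surface-elevation errors into thickness errors,
# e.g. a 0.2 m height-difference error maps to < 2 m of thickness error:
thk_err = 0.2 * factor
```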
Basal melt rates from plume modeling
The oceanic forcing as a driver of basal melting is assessed through four conductivity–temperature–depth (CTD) profiles that were measured inside the ice-shelf cavity between 1998 and 2014 during similar seasons (Supplementary Fig. 2b). The first profile was taken in August 1998 through a borehole drilled near the eastern end of the MGO-ridge (Fig. 1, the western CTD location). The other three profiles were taken in close proximity to each other near the northern calving front towards Dijmphna Sund (Fig. 1). Two of these profiles were obtained in September 2009 (refs 20,31), and one in September 2014 (ref. 29). We use an ice-shelf plume model (ref. 22) to estimate the sensitivity of basal melt rates to changes in ocean temperature, based on these observations. While the model provides reliable estimates of basal melting for one-dimensional configurations (ref. 38), the plume dynamics were augmented to account for a varying width (lateral extent) of the plume perpendicular to the flow direction, which is important to provide area-averaged melt rates for an uneven distribution of ice-shelf area at different depths (ref. 39). This is represented in the model by a non-dimensional parameter of the change in ice-shelf area as a function of depth (dw/dz), which was computed by binning and normalizing the gridded ice-draft data (ref. 40) into 200 m depth bins. To account for the varying geometry of the ice base for different plume paths, a set of five representative profiles of the ice-base slope was used (Fig. 1, green lines). The profile lengths vary between 62 and 70 km along the axis of the glacier flow and were chosen to have a regular spacing across the glacier between the grounding line and the calving front (Supplementary Fig. 2a). An ensemble of mean melt rates was computed by averaging the melt rates along each profile. To estimate the melting sensitivity to changes in ocean temperatures, the model was forced with ambient ocean temperatures provided by the different CTD profiles.
Other parameters and constants were adopted from ref. 22, except for the entrainment coefficient, which was set to E0 = 0.016 (as opposed to E0 = 0.036), such that mean melt rates obtained from the 1998 CTD data match the observed bulk thinning of the glacier.
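The binning of the ice draft into the non-dimensional area-per-depth parameter can be illustrated as follows. The draft field below is a synthetic one-dimensional stand-in, not the gridded data of ref. 40:

```python
import numpy as np

def area_fraction_per_depth(draft, bin_width=200.0):
    """Normalized ice-shelf area per depth bin, a proxy for the dw/dz parameter."""
    bin_edges = np.arange(0.0, draft.max() + bin_width, bin_width)
    counts, edges = np.histogram(draft, bins=bin_edges)
    return edges, counts / counts.sum()

# synthetic draft: deep near the grounding line (x=0), shallow near the front (x=1)
x = np.linspace(0.0, 1.0, 1000)
draft = 600.0 * (1.0 - x) ** 2 + 50.0   # m, assumed shape between 50 and 650 m
edges, frac = area_fraction_per_depth(draft)
```

For this shape, most of the ice-shelf area lies in the shallowest bin, which is the kind of depth-dependent area distribution the plume model's dw/dz parameter encodes.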
Surface mass balance variability
In order to estimate the variability of the surface mass balance on the ice shelf, we calculated annual positive degree-day sums (ref. 41) based on air temperature observations at Danmarkshavn, located about 300 km south of 79 North Glacier, and Station Nord, about 200 km to the north (Fig. 1; see ref. 17). This simple approach provides the temporal variability of surface melt (Supplementary Fig. 1) and is thus sufficient to estimate relative surface mass balance changes.
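A minimal positive-degree-day sketch; the degree-day factor and the short temperature series below are illustrative assumptions, not the station data used in the study:

```python
def pdd_sum(daily_mean_temps):
    """Positive degree-day sum (deg C day): only days above 0 deg C contribute."""
    return sum(max(t, 0.0) for t in daily_mean_temps)

# assumed degree-day factor for ice, of order 8 mm w.e. per deg C day
DDF_ICE = 8.0e-3                       # m w.e. / (deg C day)
temps = [-5.0, 2.0, 6.5, 0.0, 3.5]     # illustrative daily mean temperatures
melt = DDF_ICE * pdd_sum(temps)        # relative surface-melt proxy (m w.e.)
```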
Ice-dynamic glacier response
To assess the impact of the observed ice-shelf thinning, the open-source 3-D thermomechanically coupled ice-flow model Elmer/Ice (ref. 42) was applied to 79 North Glacier and its upstream region. The stability of the floating ice for present-day conditions is evaluated by computing the buttressing field according to ref. 43. The future evolution of the glacier system is investigated with a 100-year-long forward simulation, in which we impose a (negative) mass balance 1.5 times higher than that required for present-day steady-state conditions. This forcing corresponds to the observed mean thinning rates.
As input data we use surface velocities from feature-tracking results (refs 44,45), the TanDEM-X global DEM (ref. 37), the ice-thickness distribution from BedMachine Greenland version 3 (ref. 46; see Supplementary Fig. 4) and a mass balance that is assimilated from the ice-flux divergence, as explained below. The footprint of the investigated domain is covered by a regular triangular mesh with 1 km spatial resolution.
For the buttressing field, we solve an optimization problem to infer basal friction and stiffening coefficients by matching modeled ice velocities with observed velocities (see, for example, refs 47,48). To avoid overfitting or over-regularization, an L-curve analysis was performed to select the optimal parameters for the inversion. From the modeled velocities, the 2D buttressing field is computed following ref. 43.
The calculations for the temporal evolution are based on the shallow shelf approximation (SSA, i.e., a 2D representation), using a non-linear constitutive equation (flow exponent n = 3) and a linear friction law. The ice viscosity B and the linear friction coefficient β2 are estimated using standard inverse methods. The cost function considers differences with respect to the observed surface velocity field. To avoid flow instabilities during transient runs, the oscillations occurring in the inverted 2D viscosity field are eliminated by defining areas of equal viscosity. The geometry evolution of the glacier is calculated using the thickness evolution equation:
$$\frac{{\partial H}}{{\partial t}} + \nabla \cdot (\bar uH) = a_{\mathrm{S}} + a_{\mathrm{b}},$$
where H is the glacier thickness, t the time, \(\bar u\) the mean horizontal velocity, and \(a_{\mathrm{S}}\) and \(a_{\mathrm{b}}\) the surface and bottom mass balance.
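A one-dimensional explicit sketch of this continuity equation (the actual model solves the 2-D SSA problem in Elmer/Ice; the grid, time step and values below are illustrative):

```python
import numpy as np

def step_thickness(H, u, dx, dt, a_s, a_b):
    """One explicit upwind step of dH/dt + d(u H)/dx = a_s + a_b, for u > 0."""
    flux = u * H
    H_new = H.copy()
    H_new[1:] -= dt * (flux[1:] - flux[:-1]) / dx   # flux divergence (upwind)
    H_new[1:] += dt * (a_s + a_b)                   # surface + bottom mass balance
    return H_new                                    # H_new[0] kept as inflow boundary

# uniform flow: the flux divergence vanishes and only the mass balance acts
H0 = np.full(6, 100.0)                              # m
H1 = step_thickness(H0, np.full(6, 800.0),          # m/yr
                    dx=1000.0, dt=1.0, a_s=-0.5, a_b=-0.5)
```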
The stability of Nioghalvfjerdsfjorden with respect to different states of the terminating ice shelf is investigated by running the ice-flow model with a synthetic mass balance scenario. First, a 2D mass balance field ms(x, y) is estimated using the thickness evolution equation. This is achieved by introducing the velocity field calculated with the flow model into the thickness evolution equation, imposing zero mass balance and retrieving the resulting elevation-change distribution ∂H/∂t. This distribution represents the mass balance necessary to keep the glacier in steady state. Afterwards, the mass balance scenario is modified according to the derived thinning rates. As the steady-state mass balance is about −12 m year−1 in the region of Midgardsormen, in order to compensate for ice-shelf convergence, the mean thinning rate of about −5.5 m year−1 represents an intensification of the mass balance by roughly a factor of 1.5. This amplifying factor is used for the scenario run, starting from the steady state as the initial condition. This is comparable to the scenario of ref. 32, in which the basal melt at the grounding line was increased from −30 to −90 m year−1; our mean melt rates at the grounding line for the scenario run reach about −100 m year−1.
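The amplification factor follows directly from the numbers quoted above:

```python
steady_mb = -12.0   # m/yr, steady-state mass balance near Midgardsormen
thinning = -5.5     # m/yr, observed mean thinning rate
# adding the observed thinning to the steady-state balance intensifies it by ~1.5x
factor = (steady_mb + thinning) / steady_mb   # ~1.46, rounded to 1.5 in the text
scenario_mb = 1.5 * steady_mb                 # -18 m/yr imposed in the forward run
```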
The seismic data across Midgardsormen, the CTD profiles and the TanDEM-X surface elevation profiles used in this paper are available via the PANGAEA database (https://doi.org/10.1594/PANGAEA.891369, https://doi.org/10.1594/PANGAEA.891386). The airborne radar data are available from the Microwaves and Remote Sensing Division, Danish Technical University (DTU), on request. Meteorological data used for the surface mass balance calculations are available at the Danish Meteorological Institute: http://www.dmi.dk/laer-om/generelt/dmi-publikationer/2013, technical report No. 15-08, John Cappelen (ed.), Weather observations from Greenland 1958–2014—Observation data with description. Landsat data, used for tracking Midgardsormen (Path 11, Row 02), are available from the USGS remote sensing data archive: https://glovis.usgs.gov/app?fullscreen=0. Surface elevation data for 2012 and 2014 are retrieved from the NASA Airborne Topographic Mapper (ATM) data repository: http://nsidc.org/data/ILATM2/versions/2#. The TanDEM-X global DEM, used for surface elevation information in the numerical modeling experiment, is available on request via a science proposal at the German Aerospace Center (DLR) only. Ice thickness information is taken from BedMachine Greenland version 3: http://sites.uci.edu/morlighem/dataproducts/bedmachine-greenland/, while the surface velocity is used from http://cryoportal.enveo.at/data/ and https://nsidc.org/data/NSIDC-0670/versions/1.
Mouginot, J. et al. Fast retreat of Zachariæ Isstrøm, northeast Greenland. Science 350, 1357–1361 (2015).
Yang, Q. et al. Recent increases in Arctic freshwater flux affects Labrador Sea convection and Atlantic overturning circulation. Nat. Commun. 7, 10525 (2016).
Abdalati, W. et al. Outlet glacier and margin elevation changes: near coastal thinning of the Greenland ice sheet. J. Geophys. Res. Atmos. 106, 33729–33741 (2001).
Thomas, R., Frederick, E., Krabill, W., Manizade, S. & Martin, C. Recent changes on Greenland outlet glaciers. J. Glaciol. 55, 147–162 (2009).
Joughin, I., Smith, B., Howat, I. M., Scambos, T. & Moon, T. Greenland flow variability from ice-sheet-wide velocity mapping. J. Glaciol. 56, 415–430 (2010a).
Khan, S. A. et al. Sustained mass loss of the northeast Greenland ice sheet triggered by regional warming. Nat. Clim. Change 4, 292–299 (2014).
Mikkelsen, E. Lost in the Arctic: Being the Story of the 'Alabama' Expedition, 1909–1912, (G.H. Doran, New York, 1913).
Beszczynska-Möller, A., Fahrbach, E., Schauer, U. & Hansen, E. Variability in Atlantic water temperature and transport at the entrance to the Arctic Ocean, 1997–2010. ICES J. Mar. Sci. 69, 852–863 (2012).
Polyakov, I. V., Pnyushkov, A. V. & Timokhov, L. A. Warming of intermediate Atlantic Water of the Arctic Ocean in the 2000s. J. Clim. 25, 8362–8370 (2012).
Bekryaev, R. V., Polyakov, I. V. & Alexeev, V. A. Role of polar amplification in long-term surface air temperature variations and modern Arctic warming. J. Clim. 23, 3888–3906 (2010).
Sneed, W. A. & Hamilton, G. S. Recent changes in the Norske Øer Ice Barrier, coastal Northeast Greenland. Ann. Glaciol. 57, 47–55 (2016).
Gagliardini, O., Durand, G., Zwinger, T., Hindmarsh, R. C. A. & Le Meur, E. Coupling of ice‐shelf melting and buttressing is a key process in ice‐sheets dynamics. Geophys. Res. Lett. https://doi.org/10.1029/2010GL043334 (2010).
Mayer, C., Reeh, N., Jung‐Rothenhäusler, F., Huybrechts, P. & Oerter, H. The subglacial cavity and implied dynamics under Nioghalvfjerdsfjorden Glacier, NE−Greenland. Geophys. Res. Lett. 27, 2289–2292 (2000).
Nilsson, J., Gardner, A., Sørensen, L. S. & Forsberg, R. Improved retrieval of land ice topography from CryoSat-2 data and its impact for volume-change estimation of the Greenland Ice Sheet. Cryosphere 10, 2953–2969 (2016).
Rignot, E., Gogineni, S., Joughin, I. & Krabill, W. Contribution to the glaciology of northern Greenland from satellite radar interferometry. J. Geophys. Res. Atmos. 106, 34007–34019 (2001).
Scambos, T. A., Dutkiewicz, M. J., Wilson, J. C. & Bindschadler, R. A. Application of image cross-correlation to the measurement of glacier velocity using satellite image data. Remote Sens. Environ. 42, 177–186 (1992).
Cappelen, J. Technical Report 14-04, Greenland—DMI Historical Climate Data Collection 1784–2013—with Danish Abstracts. DMI Ministry of Climate and Energy. Copenhagen. http://www.dmi.dk/fileadmin/Rapporter/TR/tr14-04 (2014).
Bøggild, C. E., Reeh, N. & Oerter, H. Modelling ablation and mass-balance sensitivity to climate change of Storstrømmen, northeast Greenland. Glob. Planet. Change 9, 79–90 (1994).
Reeh, N. Parameterization of melt rate and surface temperature on the Greenland ice sheet. Polarforschung 59, 113–128 (1989).
Straneo, F. et al. Characteristics of ocean waters reaching Greenland's glaciers. Ann. Glaciol. 53, 202–210 (2012).
Schaffer, J. et al. Warm water pathways toward Nioghalvfjerdsfjorden Glacier, Northeast Greenland. J. Geophys. Res. Oceans 122, 4004–4020 (2017).
Jenkins, A. A one-dimensional model of ice shelf-ocean interaction. J. Geophys. Res. Oceans 96, 20671–20677 (1991).
Joughin, I., Smith, B., Howat, I. & Scambos, T. Measures Greenland Ice Velocity Map from InSAR Data. Boulder, Colorado: NASA DAAC at the National Snow and Ice Data Center. https://doi.org/10.5067/MEASURES/CRYOSPHERE/nsidc-0478.001 (2010b).
Bersch, M., Yashayaev, I. & Koltermann, K. P. Recent changes of the thermohaline circulation in the subpolar North Atlantic. Ocean Dynam. 57, 223–235 (2007).
Yashayaev, I. Hydrographic changes in the Labrador Sea, 1960–2005. Prog. Oceanogr. 73, 242–276 (2007).
Williams, R. G., Roussenov, V., Smith, D. & Lozier, M. S. Decadal evolution of ocean thermal anomalies in the North Atlantic: the effects of Ekman, overturning, and horizontal transport. J. Clim. 27, 698–719 (2014).
Våge, K. et al. The Irminger Gyre: circulation, convection, and interannual variability. Deep Sea Res. 58, 590–614 (2011).
Polyakov, I. V. et al. Greater role for Atlantic inflows on sea-ice loss in the Eurasian Basin of the Arctic Ocean. Science 356, 285–291 (2017).
Schaffer, J. Ocean impact on the 79 North Glacier, Northeast Greenland. PhD Thesis, University of Bremen. http://nbn-resolving.de/urn:nbn:de:gbv:46-00106281-12 (2017).
Holland, P. R. The transient response of ice shelf melting to ocean change. J. Phys. Oceanogr. 47, 2101–2114 (2017).
Wilson, N. J. & Straneo, F. Water exchange between the continental shelf and the cavity beneath Nioghalvfjerdsbræ (79 North Glacier). Geophys. Res. Lett. 42, 7648–7654 (2015).
Choi, Y., Morlighem, M., Rignot, E., Mouginot, J. & Wood, M. Modeling the response of Nioghalvfjerdsfjorden and Zachariae Isstrøm Glaciers, Greenland, to ocean forcing over the next century. Geophys. Res. Lett. 44, 11071–11079 (2017).
Lapazaran, J. J., Otero, J., Martín-Español, A. & Navarro, F. J. On the errors involved in ice-thickness estimates I: ground-penetrating radar measurement errors. J. Glaciol. 62, 1008–1020 (2016).
Krabill, W. B. IceBridge ATM L2 Icessn elevation, slope, and roughness, Version 2. Boulder, Colorado, NASA National Snow and Ice Data Center Distributed Archive Center. https://doi.org/10.5067/CPRXXK3F39RV (2014).
Rossi, C., Rodriguez-Gonzalez, F., Fritz, T., Yague-Martinez, N. & Eindeder, M. TanDEM-X calibrated Raw DEM generation. ISPRS J. Photogramm. 73, 12–20 (2012).
Wessel, B. TanDEM-X ground segment DEM products specification document. DLR Doc. TD-GS-PS-0021 3.1, Date 5.8.2016 (2016).
Rizzoli, P. et al. Generation and performance assessment of the global TanDEM-X digital elevation model. ISPRS J. Photogramm. 132, 119–139 (2017).
Jenkins, A. Convection-driven melting near the grounding lines of ice shelves and tidewater glaciers. J. Phys. Oceanogr. 41, 2279–2294 (2011).
Hattermann, T. Ice shelf—ocean interaction in the Eastern Weddell Sea, Antarctica. PhD Thesis, University of Tromsø. http://hdl.handle.net/10037/5147 (2012).
Schaffer, J. et al. A global, high-resolution data set of ice sheet topography, cavity geometry, and ocean bathymetry. Earth Syst. Sci. Data 8, 543 (2016).
Braithwaite, R. J. Positive degree-day factors for ablation on the Greenland ice sheet studied by energy-balance modelling. J. Glaciol. 41, 153–160 (1995).
Gagliardini, O. et al. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model. Geosci. Model Dev. 6, 1299–1318 (2013).
Fürst, J. J. et al. The safety band of Antarctic ice shelves. Nat. Clim. Change 6, 479–482 (2016).
Nagler, T., Rott, H., Hetzenecker, M., Wuite, J. & Potin, P. The Sentinel-1 mission: new opportunities for ice sheet observations. Remote Sens. 7, 9371–9389 (2015).
Joughin, I., Smith, B. & Howat, I. A complete map of Greenland ice velocity derived from satellite data collected over 20 years. J. Glaciol. 64, 1–11 (2017).
Morlighem, M. et al. BedMachine v3: complete bed topography and ocean bathymetry mapping of Greenland from multibeam echo sounding combined with mass conservation. Geophys. Res. Lett. 44, 1–11 (2017).
Fürst, J. et al. Assimilation of Antarctic velocity observations provides evidence for uncharted pinning points. Cryosphere 9, 1427–1443 (2015).
Cornford, S. L. et al. Century-scale simulations of the response of the West Antarctic Ice Sheet to a warming climate. Cryosphere 9, 1579–1600 (2015).
The TanDEM-X global DEM tiles and CoSSC were provided by DLR under the research proposals DEM_GLAC0671 and XTI_GLA6663 ©DLR 2017. The work was partly supported by Deutsche Forschungsgemeinschaft (DFG, FL 848/1-1). Surface-elevation measurements were provided by NASA's Airborne Topographic Mapper (ATM) Program. This work was supported in part through grant (OGreen79) from the Deutsche Forschungsgemeinschaft (DFG) as part of the Special Priority Program (SPP)-1889 "Regional Sea Level Change and Society" (SeaLevel). C.S. was supported by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the priority programme "Antarctic Research with comparative investigations in Arctic ice areas" by the grant MA 3347/10-1. The Microwaves and Remote Sensing Division, Danish Technical University (DTU), is acknowledged for providing the airborne ice thickness data.
Bavarian Academy of Sciences and Humanities, Alfons-Goppel Str. 11, 80539, Munich, Germany
Christoph Mayer & Carlo Licciulli
Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Am Handelshafen 12, 27570, Bremerhaven, Germany
Janin Schaffer, Tore Hattermann & Torsten Kanzow
Akvaplan-niva AS, Fram Centre, Postbox 6606, 9296, Langnes, Tromsø, Norway
Tore Hattermann
Remote Sensing Technology Institute, German Aerospace Centre (DLR), Münchener Straße 20, 82234, Oberpfaffenhofen, Weßling, Germany
Dana Floricioiu & Lukas Krieger
Norwegian Polar Institute, Fram Centre, Postbox 6606, 9296, Langnes, Tromsø, Norway
Paul A. Dodd
University of Tübingen, Geologie & Geodynamik, Wilhelmstraße 56, 72074, Tübingen, Germany
Clemens Schannwell
C.M. conducted field work, conceived the research and wrote the article; J.S. and T.H. analysed the oceanographic data and performed the plume modeling; D.F. and L.K. processed and analysed the TanDEM-X data; P.A.D. provided CTD measurements and input to the oceanographic discussion; T.K. discussed the oceanographic part and analysed the melt impact; C.S. and C.L. calculated the buttressing and performed the ice-dynamic model experiments. All authors contributed to the writing of the manuscript and the revisions.
Correspondence to Christoph Mayer.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Peer Review File
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Mayer, C., Schaffer, J., Hattermann, T. et al. Large ice loss variability at Nioghalvfjerdsfjorden Glacier, Northeast-Greenland. Nat Commun 9, 2768 (2018). https://doi.org/10.1038/s41467-018-05180-x
Received: 22 November 2017
Nature Communications (Nat Commun) ISSN 2041-1723 (online)
Shear strain concentration mechanism in the lower crust below an intraplate strike-slip fault based on rheological laws of rocks
Xuelei Zhang & Takeshi Sagiya
Earth, Planets and Space, volume 69, Article number: 82 (2017)
We conduct a two-dimensional numerical experiment on the lower crust under an intraplate strike-slip fault based on laboratory-derived power-law rheologies considering the effects of grain size and water. To understand the effects of far-field loading and material properties on the deformation of the lower crust on a geological time scale, we assume steady fault sliding on the fault in the upper crust and ductile flow in the lower crust. To avoid a stress singularity, we introduce a yield threshold in the brittle–ductile transition near the down-dip edge of the fault. Regarding the physical mechanisms for shear strain concentration in the lower crust, we consider frictional and shear heating, grain size, and power-law creep. We evaluate the significance of these mechanisms in the formation of the shear zone under an intraplate strike-slip fault with slow deformation. The results show that in the lower crust, plastic deformation is possible only when the stress or temperature is sufficiently high. At a similar stress level, \(\sim\)100 MPa, dry anorthite begins to undergo plastic deformation at depths around 28–29 km, about 8 km deeper than for wet anorthite. As a result of dynamic recrystallization and grain growth, the grain size in the lower crust may vary laterally and as a function of depth. A comparison of the results with constant and non-constant grain sizes reveals that the shear zone in the lower crust is created by power-law creep and is maintained by dynamically recrystallized material in the shear zone, because grain growth occurs on a timescale much longer than the recurrence interval of intraplate earthquakes. Owing to the slow slip rate, shear and frictional heating have negligible effects on the deformation of the shear zone. The heat production rate depends weakly on the rock rheology; the maximum temperature increase over 3 Myr is only a few tens of degrees.
Ductile shear zones are believed to exist in the lower crust below interplate strike-slip faults on the basis of various observational, experimental, and theoretical studies as well as geological observations of exhumed shear zones. Thermal weakening due to shear heating has been considered as an important process for the development and maintenance of shear zones (e.g., Yuen et al. 1978; Fleitout and Froidevaux 1980). Observation of the broadly distributed heat flow anomaly on the San Andreas Fault (see Lachenbruch and Sass 1980) has been explained by shear heating in the lower crust. The temperature anomaly in the lower crust can reach several hundred degrees, which can create an observable heat flow anomaly on the surface (e.g., Thatcher and England 1998; Leloup et al. 1999; Takeuchi and Fialko 2012). A large temperature anomaly can result in a weak zone with low seismic velocity that can be observed as a heterogeneous velocity structure in the seismic tomography data (Wittlinger et al. 1998). Furthermore, mylonite outcrops of exhumed faults (White et al. 1980) provide direct evidence for the existence of ductile shear zones in the lower crust under interplate (e.g., Rutter 1999; Little et al. 2002) and intraplate faults (e.g., Shimada et al. 2004; Fusseis et al. 2006; Takahashi 2015).
Compared with interplate faults, intraplate strike-slip faults have much smaller slip rates, at <1 mm/year, and their age is much younger in the Japanese Islands (less than 3 Myr; Doke et al. 2012). However, heterogeneous structures beneath intraplate strike-slip faults observed by seismic tomography (e.g., Nakajima and Hasegawa 2007; Nakajima et al. 2010) and magnetotelluric surveys (e.g., Ogawa and Honkura 2004; Yoshimura et al. 2009) suggest the existence of localized weak zones in the lower crust just below intraplate active faults (Iio et al. 2002, 2004). The spatial resolution of these observations is, however, insufficient to resolve the structures of such ductile shear zones. Therefore, understanding the mechanisms that lead to shear strain concentration in the lower crust beneath an intraplate strike-slip fault is an important step in understanding the deformation of the crust.
In this study, we construct a series of numerical models on the deformation in the lower crust below an active intraplate strike-slip fault based on laboratory-derived rheological laws. We simulate the evolution of viscosity and deformation patterns of the lower crust beneath an immature intraplate strike-slip fault on a geological timescale. We consider three mechanisms of strain localization: shear and fault frictional heating, grain size reduction, and power-law creep. The effect of water is quantitatively evaluated with water fugacity. We discuss the role of shear strain concentration mechanisms and boundary conditions in the development of the shear zone. In addition, we compare the shear zones beneath intraplate and interplate strike-slip faults to identify the controlling factors for lower crustal shear localization under intraplate strike-slip faults.
We simulated the deformation of the lower crust beneath an intraplate strike-slip fault by applying a velocity boundary condition representing far-field loading. We solved the stress equilibrium equation and the heat flow equation for a thermo-mechanical coupled model, and we used laboratory-derived rheological laws to control the behavior of rocks.
Model geometry
The model domain is 35 km thick in the vertical (z) direction and 30 km wide in the fault-normal (x) direction. The Mohorovičić (Moho) discontinuity is represented by a horizontal boundary at a depth of 35 km. Following Thatcher and England (1998), we considered the problem in a 2-D plane perpendicular to the fault trace, as shown in Fig. 1. We assumed two layers: a rigid upper crust and a ductile lower crust, and the entire crust is composed of wet or dry anorthite. In the upper crust where brittle failure is the dominant mode of deformation, an infinitely long vertical creeping fault is assumed with the fault strike parallel to the y-axis. The lower crust is deformed by plastic flow, and there is a semi-brittle regime between the upper and the lower crust. The lower boundary of the semi-brittle regime is the brittle–ductile transition (BDT), the depth of which depends on the assumption of crustal rheology (Table 1). Considering the symmetry of the vertical strike-slip fault, our model region includes only one side of the fault bounded by the surface and a vertical plane of bilateral symmetry, which is taken to be the center of the shear zone.
Table 1 Model configurations
The constitutive relation for the plastic flow of rocks is described as follows (e.g., Bürgmann and Dresen 2008):
$$\begin{aligned} \dot{\varepsilon } = A \tau _{{\rm s}}^{n} L^{-m} f_{{\rm H_{2}O}}^{r} {{\rm exp}}\left(-\dfrac{Q+pV}{RT} \right) , \end{aligned}$$
where \(\tau _{\rm s}\) is the maximum shear stress, given by the square root of the second deviatoric stress invariant; L is the grain size; \(f_{{\rm H_{2}O}}\) is the water fugacity; Q and V are the activation energy and activation volume, respectively; R is the universal gas constant; p is pressure; and A, n, m, r are material constants. The laboratory-derived parameters for anorthite are summarized in Table 2. Regarding the physical mechanism of plastic flow, in this study, we considered both diffusion creep and dislocation creep. For a given mineral, we assume that the same shear stress controls the two deformation mechanisms (e.g., Gueydan et al. 2001; Montési and Hirth 2003). Under this assumption, the total strain rate \(\dot{\varepsilon }_{\rm total}\) is expressed as the sum of the diffusion creep strain rate \(\dot{\varepsilon }_{\rm diff}\) and the dislocation creep strain rate \(\dot{\varepsilon }_{\rm disl}\).
$$\begin{aligned} \dot{\varepsilon }_{{\rm total}}=\dot{\varepsilon }_{{\rm diff}}+ \dot{\varepsilon }_{{\rm disl}} \end{aligned}$$
One can define the effective viscosity, such that
$$\begin{aligned} \eta _{{\rm eff}}=\dfrac{\tau _{s}}{\dot{\varepsilon }_{{\rm total}}}. \end{aligned}$$
The grain size in this study is assumed to follow the model proposed by Bresser et al. (1998), who argued that grain growth in the diffusion creep regime increases the grain size until dislocation creep can occur, while dynamic recrystallization in the dislocation creep regime reduces the grain size until diffusion creep can occur. They postulated that the grain size is determined by the equation for Equilibrium Grain Size (\(L_{{\rm EGS}}\)):
$$\begin{aligned} \dot{\varepsilon }_{{\rm diff}}(T,p,\tau ,L)=\dot{\varepsilon }_{{\rm disl}}(T,p,\tau ) \end{aligned}$$
where T is temperature, p is pressure, \(\tau\) is shear stress, and L is grain size. Combining Eqs. 1 and 4, we can obtain the expression for \(L_{{\rm EGS}}\), which is a function of temperature and shear stress:
$$\begin{aligned} L_{{\rm EGS}}=\left[ \dfrac{A_{{\rm diff}}}{A_{{\rm disl}}\tau ^{n_{{\rm disl}}-1}_{s}} {{\rm exp}}\left(\dfrac{Q_{{\rm disl}}+pV_{{\rm disl}}-Q_{{\rm diff}}-pV_{{\rm diff}}}{RT}\right) \right] ^{\dfrac{1}{m_{{\rm diff}}}} \end{aligned}$$
The subscripts diff and disl refer to the rheological parameters for diffusion creep and dislocation creep in Table 2. From this assumption, we expect a large variation in grain size under the thermal and stress conditions of the lower crust (Fig. 2). We also tested a case of a Constant Grain Size (\(L_{\rm CGS}\)) of 500 \(\upmu {\mathrm{m}}\) for comparison.
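Equations 1, 3 and 5 can be combined in a short numerical sketch. Because the Table 2 values are not legible in this extract, the rheological constants below are illustrative placeholders (assumptions), not the actual Rybacki et al. (2006) parameters; the consistency check — diffusion and dislocation creep rates balancing exactly at \(L_{\rm EGS}\) — holds regardless of the values chosen:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

# Placeholder constants for Eq. 1 (assumed, not the Table 2 values).
# Units: tau in MPa, grain size L in micrometres, Q in J/mol, V in m^3/mol, p in Pa.
DIFF = dict(A=10**-0.7, n=1, m=3, Q=159e3, V=24e-6)  # diffusion creep
DISL = dict(A=10**0.2, n=3, m=0, Q=345e3, V=38e-6)   # dislocation creep

def strain_rate(par, tau, L, p, T):
    """Eq. 1 with the water-fugacity factor folded into A:
    eps = A tau^n L^-m exp(-(Q + p V)/(R T))."""
    return (par['A'] * tau**par['n'] * L**(-par['m'])
            * math.exp(-(par['Q'] + p * par['V']) / (R * T)))

def effective_viscosity(tau, L, p, T):
    """Eq. 3: eta_eff = tau / (eps_diff + eps_disl)."""
    return tau / (strain_rate(DIFF, tau, L, p, T) + strain_rate(DISL, tau, L, p, T))

def equilibrium_grain_size(tau, p, T):
    """Eq. 5: the grain size at which diffusion and dislocation creep rates balance."""
    dQpV = (DISL['Q'] + p * DISL['V']) - (DIFF['Q'] + p * DIFF['V'])
    return (DIFF['A'] / (DISL['A'] * tau**(DISL['n'] - 1))
            * math.exp(dQpV / (R * T)))**(1.0 / DIFF['m'])
```

By construction, evaluating Eq. 1 for both mechanisms at `equilibrium_grain_size(tau, p, T)` returns identical strain rates, which is the defining property of \(L_{\rm EGS}\) in Eq. 4.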
Contours of equilibrium grain size as a function of temperature and stress, assuming wet anorthite rheology
For wet rheology (r = 1), the effect of water weakening is evaluated with water fugacity \(f_{{\rm H_{2}O}}\). The fugacity of a gaseous species at any temperature (T) and pressure (p) can be calculated from the equation of state using the following equation (Karato 2012):
$$\begin{aligned} \log \dfrac{f(p,T)}{p}=\dfrac{1}{RT} \lim _{p_{0} \rightarrow 0} \int _{p_{0}}^{p} \left(V_{{\rm m}}(p^{\prime},T)-V_{{\rm m}}^{{\rm id}}(p^{\prime},T)\right) dp^{\prime}. \end{aligned}$$
where \(V_{\rm m}\) and \(V_{\rm m}^{\rm id}\) are the molar volumes of a real gas and an ideal gas, respectively. For the real gas, we use the van der Waals equation of state: \(p=\dfrac{RT}{V_{{\rm m}}-b}-\dfrac{a}{V_{{\rm m}}^{2}}\). The van der Waals constants a and b of water (\({\mathrm{H}}_2\mathrm{O}\)) are \(5.537\times 10^{-1}\, \mathrm{m}^{6}\,\mathrm{Pa\, mol}^{-2}\) and \(3.049 \times 10^{-5}\, \mathrm{m}^{3}\,\mathrm{mol}^{-1}\), respectively. \(V_{\rm m}\) in the term \(\dfrac{a}{V_{m}^{2}}\) can be approximated by the ideal-gas value \(\dfrac{RT}{p}\), so that \(V_{{\rm m}}=\dfrac{R^{3}T^{3}}{pR^{2}T^{2}+ap^{2}}+b.\) Integrating Eq. 6 with these equations of state for the real and ideal gases between \(p_{0}\) and p, and letting \(p_{0}\rightarrow 0\), one obtains the expression for fugacity,
$$\begin{aligned} f(p,T)=\dfrac{pR^{2}T^{2}}{R^{2}T^{2}+ap}\,{{\rm exp}}\left(\dfrac{bp}{RT}\right) . \end{aligned}$$
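Equation 7 is straightforward to evaluate; a minimal sketch using the van der Waals constants quoted above (in the low-pressure limit, f/p approaches 1, recovering ideal-gas behavior):

```python
import math

R = 8.314     # universal gas constant, J/(mol K)
a = 5.537e-1  # van der Waals constant of H2O, m^6 Pa mol^-2
b = 3.049e-5  # van der Waals constant of H2O, m^3 mol^-1

def water_fugacity(p, T):
    """Eq. 7: f(p, T) = p R^2 T^2 / (R^2 T^2 + a p) * exp(b p / (R T)).
    p in Pa, T in K; returns fugacity in Pa."""
    RT = R * T
    return p * RT**2 / (RT**2 + a * p) * math.exp(b * p / RT)
```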
Table 2 Rheological properties of rocks from laboratory measurements (Rybacki et al. 2006)
Initial and boundary conditions
Because we consider an infinitely long strike-slip fault that cuts through the entire upper crust and terminates in the lower crust, there is no vertical motion. The far-field horizontal velocity \(v_{0}\) is half of the total relative velocity; it is assumed to be 0.5 and 15 mm/year for intraplate and interplate faults, respectively, and it is applied from the surface to the depth \(z_{{\rm b}}\) and on the far-field boundaries. We assume the fault strength in the brittle fracture regime on the basis of Byerlee's law (Byerlee 1978):
$$\begin{aligned} \tau _{f} = {\left\{ \begin{array}{ll} 0.85 \sigma _{{\rm n}} &{} (\sigma _{{\rm n}}<200\, [\text{MPa}]) \\ 50+0.6\sigma _{{\rm n}} &{}(200\, [\text{MPa}]<\sigma _{{\rm n}}<1700\, [\text{MPa}]), \end{array}\right. } \end{aligned}$$
where \(\tau _{{\rm f}}\) is frictional strength and \(\sigma _{{\rm n}}\) is normal stress. The strength of a material in the plastic flow regime is highly sensitive to temperature, as shown in Eq. 1. In this model, we assume that the brittle fracture and plastic flow occur independently; as a result, the mechanism that gives a lower strength becomes the dominant mechanism of deformation. The transition conditions for brittle fracture to plastic flow (brittle–ductile transition, BDT) are given by
$$\begin{aligned} \tau _{yx}=\dfrac{1}{2}\eta _{{\rm eff}}\dfrac{\partial v}{\partial x}= \tau _{{\rm f}}. \end{aligned}$$
Shear stress \(\tau _{yx}\) is solved from the model of plastic flow using different model configurations (Table 1), and \(\tau _{{\rm f}}\) is the fault frictional strength. The shear strain rate (\(\dot{\varepsilon }_{yx}\)) is solved from Eq. 9. At depths shallower than the BDT, we apply a stress boundary condition on the fault such that the flow stress is equal to the fault frictional strength (Fig. 3b). The fault gradually terminates as slip decreases with depth; at the depth of the BDT, the slip rate is 0. The slip rate in the semi-brittle regime can be calculated by integrating the shear strain rate (\(\dot{\varepsilon }_{yx}\)) over the entire domain in the x-direction. At depths greater than the BDT, no brittle fracture occurs, and the deformation is fully plastic. The velocity on the vertical plane of bilateral symmetry is zero. On the crust/mantle boundary, the boundary condition is \({{\rm d}}v/{{\rm d}}z = 0\).
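The piecewise frictional strength of Eq. 8 can be sketched directly (stresses in MPa; note that the two branches coincide at \(\sigma_{\rm n}=200\) MPa, so the strength profile is continuous):

```python
def byerlee_strength(sigma_n):
    """Eq. 8: frictional strength tau_f (MPa) as a function of normal stress sigma_n (MPa)."""
    if sigma_n < 200.0:
        return 0.85 * sigma_n
    # Branch stated for 200 MPa < sigma_n < 1700 MPa.
    return 50.0 + 0.6 * sigma_n
```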
a Fault slip velocity (v) and b shear stress on the bilateral symmetry line in the lower crust for model W1E. \(\tau _{s}\) is the second deviatoric stress invariant; \(\tau _{yx}\) and \(\tau _{yz}\) are the yx and yz components of shear stress, respectively; and the straight broken line is based on Byerlee's law. The depth of the BDT is shown by black arrows
The initial temperature is assumed with a uniform thermal gradient of 25 K/km (Table 3). The temperature of the Earth's surface is fixed at 0 °C. We assume zero heat flux at the vertical boundaries and a constant heat flux (\(0.065\,\hbox{W m}^{-2}\)) at the Moho.
Thermo-mechanical coupling model
In our model, all mechanical energy is dissipated as heat and represents a source term in the heat flow equation:
$$\begin{aligned} \rho C_{{\rm p}} \dfrac{\partial T}{\partial t}=k \left( \dfrac{\partial ^{2} T}{\partial x^{2}}+\dfrac{\partial ^{2}T}{\partial z^{2}} \right) +H_{{\rm s}}+H_{{\rm f}}, \end{aligned}$$
where the change in temperature T is a summation of thermal diffusion (k is the thermal conductivity) and volumetric heat generated by shear heating (\(H_{{\rm s}}\)) and frictional heating (\(H_{{\rm f}}\)). \(\rho\) is the density, and \(C_{{\rm p}}\) is the specific heat capacity at constant pressure. The heat produced by shear per unit time and volume is given by
$$\begin{aligned} H_{{\rm s}}=\tau _{ij}\dot{\varepsilon }_{ij}. \end{aligned}$$
The heat produced by friction is approximated by the volumetric heating on a column of the grid closest to the fault (Leloup et al. 1999):
$$\begin{aligned} H_{{\rm f}}=\tau _{{\rm f}}\dfrac{v_{0}}{\Delta x}, \end{aligned}$$
where \(\tau _{{\rm f}}\) is the frictional resistance defined in Eq. 8 and \(\Delta x\) is the width along the x-axis of the considered unit cell. We solved the heat flow equation in a 2-D space perpendicular to the fault (Fig. 1). In each time step, we assumed that the velocity is constant in time, and we solved the stress equilibrium equation for the velocity field. Because motion is purely horizontal, the only nonzero components of the stress are \(\tau _{yx}\) and \(\tau _{yz}\):
$$\begin{aligned} \dfrac{\partial \tau _{yx}}{\partial x}+\dfrac{\partial \tau _{yz}}{\partial z} =0. \end{aligned}$$
All numerical calculations in this study were performed using MATLAB. We used the Partial Differential Equation Toolbox to solve the mechanical equations, and we used the Alternating Direction Implicit finite difference method to solve the heat flow equation. The calculations were performed on a grid containing 700 \(\times\) 600 (420,000) cells, each 50 m wide and 50 m high. Although a finer grid could give a more accurate solution, the overall pattern of the solutions is insensitive to the chosen grid size, as confirmed by simulations using a finer grid with half the grid size. We simulated the fault slip and temperature evolution using an adaptive time step (e.g., Thatcher and England 1998) controlled by the amount of heat production. We calculated the temperature rise during 3 Myr because the initiation ages of active faulting in the inland areas of Japan are mostly less than 3 Myr (Doke et al. 2012).
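The paper solves Eq. 10 with an ADI scheme in MATLAB; purely as an illustration of the same equation, the sketch below performs a single explicit finite-difference step (the grid spacing, material constants, and the source field \(H = H_{\rm s} + H_{\rm f}\) are assumed inputs, not the paper's actual solver):

```python
import numpy as np

def heat_step(T, dt, dx, dz, k, rho, cp, H):
    """One explicit step of Eq. 10 on interior nodes:
    rho cp dT/dt = k laplacian(T) + H.
    T: temperature grid (K); H: volumetric heating H_s + H_f (W/m^3)."""
    kappa = k / (rho * cp)                       # thermal diffusivity, m^2/s
    assert dt <= 0.25 * min(dx, dz)**2 / kappa   # explicit stability limit
    Tn = T.copy()
    lap = ((T[2:, 1:-1] - 2.0 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dz**2
           + (T[1:-1, 2:] - 2.0 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2)
    Tn[1:-1, 1:-1] += dt * (kappa * lap + H[1:-1, 1:-1] / (rho * cp))
    return Tn
```

With a uniform temperature field and zero source, the step leaves the field unchanged; a positive source raises the interior temperature by \(\Delta T = H\,\Delta t/(\rho C_{\rm p})\) per step, as Eq. 10 implies.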
Table 3 Thermal and mechanical parameters
The effective viscosity of a rock depends on several environmental conditions, such as shear stress, grain size, and temperature. In this section, we present the calculated shear stress and grain size distributions obtained by applying a 1-D linear geothermal gradient, to evaluate the effects of grain size and power-law rheology. Moreover, we show the temperature anomaly produced by shear and frictional heating and the effective viscosity distribution.
Shear stress
As shown in Fig. 3b, the shear stress \(\tau _{yz}\) becomes very large (\({>}700\) MPa) around the point \(x = 0, z = z_{b}\). This is because the effective viscosity is extremely large in the semi-brittle regime and the elasticity of rock has not been considered in this study. As the depth increases, \(\tau _{yz}\) quickly decreases. At depths greater than the BDT, \(\tau _{yz}\) becomes negligible compared with \(\tau _{yx}\), and the maximum shear stress \(\tau _{{\rm s}}\) is nearly equal to \(\tau _{yx}\). Therefore, the distribution of maximum shear stress in the lower crust below the BDT is considered to be a result of far-field loading, and we focus our discussion on the lower crust below the BDT.
Contours of shear stress (\(\tau _{s}\)) in the lower crust as a function of depth and distance from the fault for models a W1E, b D1E, c W30E, d W1C, e D1C, and f W30C. The thickness of the lower crust and the depth of the BDT are dependent on the assumed rheology and far-field velocity (\(v_{0}\)). The gray broken lines represent the BDT depth
Figure 4 shows the distribution of the maximum shear stress in the lower crust for our six cases. The depth of the BDT is different in each case, as shown by the gray broken lines in Fig. 4. Compared with wet anorthite, dry anorthite requires a higher temperature to cause plastic deformation: the brittle region extends deeper into the crust (28–29 km depth), and the BDT for the dry anorthite case is about 8 km deeper than in the wet anorthite cases. Therefore, \(z_{{\rm b}}\) was set at a depth of 25 km for the model with dry anorthite. In the case of interplate strike-slip faults (Fig. 4c, f), the shear stress is only slightly larger, and the BDT is about 2 km deeper than that for intraplate cases (Fig. 4a, d), even though the slip rate of an interplate strike-slip fault is 30 times larger than that of an intraplate strike-slip fault. Therefore, the shear stress in the lower crust and the depth of the BDT are not sensitive to the fault slip rate. Shear stress concentrates around the down-dip extension of the fault; the largest shear stress is located at the depth of the BDT, and shear stress drops with depth and distance from the fault.
Grain size distribution
We calculated \(L_{{\rm EGS}}\) by balancing the shear strain rates of diffusion creep and dislocation creep. As examples, \(L_{{\rm EGS}}\) obtained from models W1E and D1E with the initial temperature field is shown in Fig. 5. Small grains are located in the highly sheared region because both \(L_{{\rm EGS}}\) and the shear strain rate depend on temperature and shear stress (Eq. 5 and Fig. 4). In our models, the minimum grain sizes are located at the depth of the BDT under the fault, where the shear stress becomes the largest, nearly equal to the frictional strength of the fault. In models W1E and D1E, the minimum grain sizes are \(\sim\)215 and \(\sim\)17 \(\upmu \mathrm{m}\) at temperatures of \(\sim\)475 and \(\sim\)700 \(^\circ\)C, respectively. Grain size measurements show that plagioclase grains in ultramylonites have mean diameters of 16 (Okudaira et al. 2015) and 85 \(\upmu \mathrm{m}\) (Okudaira et al. 2017) under conditions of \(\sim\)700 and \(\sim\)600 \(^\circ\)C, respectively. Although our EGS results are in agreement with these observations, comparison of the calculated results with the field observations is not straightforward. For example, the shear stress on the fault could be smaller than that estimated from Byerlee's law (Iio 1997). Also, even under the same temperature and stress conditions, the dynamically recrystallized grain size may still be larger than \(L_{{\rm EGS}}\) (Bresser et al. 2001).
Outside the narrow mylonite zone, materials with relatively coarse grains (up to a few centimeters) are exposed over a wide area (e.g., Markl 1998). Our calculation with EGS provides a fairly reasonable grain size distribution. However, in the far field, where both temperature and shear stress are low, the calculated \(L_{{\rm EGS}}\) reaches several tens of centimeters, which is not realistic. This result may be ascribed to our assumption of instantaneous grain growth following the equation for \(L_{{\rm EGS}}\); mechanisms that limit grain size, such as the Zener pinning effect (e.g., Hillert 1988; Rohrer 2010), are not considered in this study.
Shear and frictional heating
Figure 6 shows temperature anomalies 3 Myr after shearing and fault sliding were initiated. Assuming wet anorthite rheology for the lower crust, the maximum temperature increases for models W1E and W30E are about 15 and 219 K, respectively. The temperature increase for the case of an intraplate strike-slip fault is much lower than that for an interplate strike-slip fault. The temperature change is largely affected by frictional heating, and the temperature rise creates a peak heat flow anomaly at the fault trace. For an interplate strike-slip fault, the peak heat flow anomaly is \(\sim\)55 \(\hbox{mW/m}^2\) above the background heat flow of 65 \(\hbox{mW/m}^2\) (Fig. 7b). In contrast, for an intraplate strike-slip fault, the expected heat flow anomaly is very small, less than 5% of the background value; therefore, we cannot expect to detect a heat flow anomaly in the intraplate case (Tanaka et al. 2004). To illustrate how rock rheology affects the temperature increase, we also performed a calculation using dry anorthite (strong rheology). Figure 6b shows that the maximum temperature increase for the D1E model is about 22 K, which is higher than that for the wet anorthite case but still insufficient to cause an observable heat flow anomaly at the surface.
Contours of equilibrium grain size distribution as a function of depth and distance from the fault, calculated from model W1E (a) and D1E (b). The gray broken line represents the BDT depth
Temperature anomaly produced by shear and frictional heating versus depth and distance from the fault for models a W1E, b D1E and c W30E after 3 Myr of fault sliding. "\(\times\)" shows the location of maximum temperature increase; numbers indicate the magnitude of maximum temperature increase
a Surface heat flow for simulations of interplate (broken line) and intraplate (solid line) strike-slip faults. b Enlarged view of the fault core; the sharp peak above the fault (\(x = {\sim}0\)) was created by fault frictional heating
Effective viscosity
The effective viscosity structure strongly depends on the assumptions applied in the calculation, as shown in Fig. 8. For intraplate cases, the effective viscosity is about \(10^{22.5}\) Pa s at the BDT under the fault. For the interplate case, in which the shear strain rate and shear stress are higher than those in the intraplate cases, the effective viscosity (Fig. 8c) becomes as small as about \(10^{21}\) Pa s at the BDT under the fault.
Effective viscosity versus depth and distance from the fault for equilibrium grain size (a–c) and constant grain size (d–f). The white broken line in d–f indicates the location in which diffusion creep and dislocation creep have the same slip rate
The effective viscosity of dislocation creep is extremely high when the stress is relatively small. In models assuming EGS, dislocation creep and diffusion creep have the same effective viscosity by construction. The effective viscosity in the far field and at the top of the lower crust is larger than \(10^{25}\) Pa s because of the relatively low temperature and small stress; in these regions, rocks behave like a rigid body.
On the other hand, in models assuming CGS, diffusion creep becomes the dominant deformation mechanism where the stress is relatively small. Owing to the linear geothermal gradient, the effective viscosity has a layered structure in the far field. In the shear zone, where the stress is large, dislocation creep dominates. The broken lines in Fig. 8d–f show the locations at which dislocation creep and diffusion creep with a grain size of 500 \(\upmu \mathrm{m}\) contribute equally: dislocation creep dominates on the left side, and diffusion creep dominates on the right side of the broken line.
A comparison of wet and dry anorthite shows that the effective viscosity is significantly lowered by the presence of water. In previous studies of interplate strike-slip faults (e.g., Takeuchi and Fialko 2013; Moore and Parsons 2015), the effective viscosities for wet and dry rheologies have similar magnitudes in the center of the shear zone because of the elevated temperature field. This is not the case for an intraplate strike-slip fault, because the change in the effective viscosity structure due to shear and frictional heating is negligible.
In this section, we discuss the relative importance of candidate mechanisms for the formation and maintenance of the shear zone in the lower crust beneath an intraplate strike-slip fault.
Shear heating, as well as frictional heating, has been considered a main cause of the lower crustal shear zone beneath a fault and the associated heat flow anomaly for interplate strike-slip faults such as the San Andreas Fault (e.g., Lachenbruch and Sass 1980; Leloup et al. 1999). We compared the shear strain rate obtained from the temperature field after 3 Myr (solid line in Fig. 9) and that obtained from the initial temperature field (broken line in Fig. 9). For the interplate strike-slip fault, a significant increase in temperature occurred around the fault tip at a depth of about 12 km (Fig. 6c). Our result for the temperature increase in model W30E is consistent with the results of recent thermo-mechanical models of interplate strike-slip faults (e.g., Takeuchi and Fialko 2012; Moore and Parsons 2015). The maximum temperature increase in the cases of wet rheologies is \(\sim\)200 °C, and the effective viscosity was significantly lowered by the increased temperature. A comparison with the shear strain rate under the 1-D linear geothermal gradient revealed that the shear zone became narrower and the BDT became shallower (Fig. 9b) after the temperature increased, which indicates that the BDT depths for interplate strike-slip faults are time dependent.
In contrast, for the case of the intraplate strike-slip fault (Fig. 9a), the change in shear strain rate during 3 Myr was negligible because the temperature increase was minimal (\(\sim\)20 K). The effect of shear and frictional heating on the long-term (geological time scale) thermal structure is thus negligible for intraplate strike-slip faults, and we conclude that such heating is not the main cause of the formation of the shear zone under intraplate faults.
The amount of heat generated by shear and frictional heating can be increased by the absence of water. In previous studies (e.g., Takeuchi and Fialko 2012; Moore and Parsons 2015), the temperature increase in the cases of dry rheologies is about 200 °C higher than that in the cases of wet rheologies. In our study, the effect of water on the temperature increase is not significant because the maximum shear strain rate (Fig. 10) and shear stress (Fig. 4) are insensitive to the rock rheology. Instead, the increase in BDT depth due to the absence of water is ~8 km, which is equivalent to a temperature increase of ~200 °C.
Shear strain rate changed by increased temperature for a W1E and b W30E. The broken contour lines show the shear strain rate with the temperature field of a 1-D geothermal gradient, and the solid contour lines show the shear strain rate with the temperature field from a 3 Myr simulation. Gray broken lines and solid lines represent the BDT depth at t = 0 and at t = 3Myr, respectively
In the current model, the degree of shear strain concentration was influenced by the assumed rheology. Deformation was more localized in the cases of power-law fluid (Fig. 10a, b) than in the case of Newtonian fluid (Fig. 10c). A comparison of the results of models W1E and W1C revealed that the shear strain rate distributions are similar in the shear zone, implying that in the current study the assumption on grain size does not affect shear strain concentration. Therefore, weakening due to power-law rheology is the most important mechanism in the formation of the shear zone in the lower crust. However, it should be noted that we only consider diffusion creep as a grain-size-dependent creep in this study. In fine-grained mylonites, deformation mechanisms other than diffusion creep, such as grain boundary sliding (Boullier and Gueguen 1975; White 1979), could occur to further reduce the strength of rocks and enhance shear strain localization.
Shear strain rate for models a W1E, b W1C, d D1E and e D1C. c Linear model deformed by diffusion creep has a constant grain size of 215 \(\upmu \mathrm{m}\) (minimum grain size of model W1E)
Once a shear zone has been formed in the lower crust, the strength heterogeneity produced by the material with small grain sizes will remain over a geological time scale (\({\sim}10^{8}\) years; Tullis and Yund 1982). Mylonite commonly observed near exhumed shear zones (White et al. 1980) provides evidence for these long-lived weak zones beneath intraplate faults. Thus, lower shear strengths are maintained by materials with small grain sizes, and strain localization should be a common feature of many active faults.
In the far field, although the shear strain rate in the W1C model was larger than that in W1E, the shear stress in the cases of CGS is smaller than that for the cases of EGS because the effective viscosity is significantly lowered by the diffusion creep. Because the shear strain rate in the far field is much smaller than that in the shear zone, the deformation in the far field has almost no influence on the deformation in the localized shear zone.
A simplifying assumption in our calculation is that EGS is achieved instantaneously, which may not be realistic. According to the model of Bresser et al. (1998), grain size evolves toward EGS depending on the strain rate at each location. Since the strain rate distribution in our calculation does not change significantly with time, the resulting EGS can be considered the result of long-term steady-state deformation. Our results demonstrate that relative motion across an intraplate fault, no matter how slowly it moves, can create a characteristic grain size distribution and corresponding strain localization in the lower crust. The model also predicts that lower crustal rocks in the far field should behave like a rigid body. Studies of post-seismic deformation showed that plastic flow in the lower crust after the 1992 Landers and 1999 Hector Mine earthquakes was not significant (Pollitz 2001; Freed et al. 2007). Our result for the effective viscosity structure under the EGS assumption is in good agreement with such observations, because in that case plastic deformation is limited to a narrow shear zone under the fault.
For interplate strike-slip faults, Savage and Burford (1973) proposed a kinematic model with a buried dislocation in an elastic half-space; this model has been used to explain geodetically observed interseismic strain accumulation. For intraplate strike-slip faults, a similar dislocation model has been applied and yielded a reasonable estimate of the fault-locking depth (e.g., Ohzono et al. 2011). The current model demonstrates that such a localized shear zone appears even in an intraplate case with a very low slip rate. This provides a physical basis for applicability of the Savage and Burford (1973) model to intraplate strike-slip faults.
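For reference, the Savage and Burford (1973) model predicts a fault-parallel surface velocity v(x) = (V/π) arctan(x/D) across a fault locked above depth D and slipping at rate V below it. The slip rate and locking depth below are illustrative assumptions, not values from this study:

```python
import math

# Savage & Burford (1973) screw-dislocation model of interseismic deformation.
# V and D are illustrative assumptions for a slow intraplate fault.
V = 2e-3   # deep slip rate, m/yr
D = 15e3   # locking depth, m

def velocity(x):
    """Fault-parallel surface velocity (m/yr) at distance x (m) from the fault."""
    return (V / math.pi) * math.atan(x / D)

for x_km in (1, 15, 50, 200):
    print(f"x = {x_km:3d} km  v = {velocity(x_km * 1e3) * 1e3:+.4f} mm/yr")
```

Far from the fault the velocity saturates at ±V/2, so the relative plate motion appears fully localized at the surface even when the deep shear zone has finite width.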
We have considered the formation and maintenance of the shear zone under an intraplate strike-slip fault. Models that incorporate laboratory-derived temperature-dependent power-law rheology, grain size, and shear and frictional heating were examined to understand the mechanisms and boundary conditions that influence the deformation of the lower crust. Water is very important in reducing the temperature required for plastic deformation in the lower crust: for wet anorthite, deformation is fully plastic at temperatures of \(\sim\)475 °C, whereas for dry anorthite it is \(\sim\)700 °C. The temperature anomaly owing to 3 Myr of heat generation on an intraplate strike-slip fault is negligibly small. In our model, dynamically recrystallized materials with small grain sizes are important for maintaining a shear zone on a geological time scale of \({\sim}10^{8}\) years. The degree of shear strain concentration is controlled by the weakening effect of the nonlinear relation between shear strain rate and stress (power-law rheology).
BDT:
brittle–ductile transition
diff.:
diffusion creep
disl.:
dislocation creep
EGS:
equilibrium grain size
CGS:
constant grain size
Boullier A, Gueguen Y (1975) Sp-mylonites: origin of some mylonites by superplastic flow. Contrib Mineral Petrol 50(2):93–104
Bürgmann R, Dresen G (2008) Rheology of the lower crust and upper mantle: evidence from rock mechanics, geodesy, and field observations. Ann Rev Earth Planet Sci 36(1):531
Byerlee JD (1978) Friction of rocks. Pure Appl Geophys 116(4–5):615–626
De Bresser J, Peach C, Reijs J, Spiers C (1998) On dynamic recrystallization during solid state flow: effects of stress and temperature. Geophys Res Lett 25(18):3457–3460
De Bresser J, Ter Heege J, Spiers C (2001) Grain size reduction by dynamic recrystallization: Can it result in major rheological weakening? Int J Earth Sci 90(1):28–45
Doke R, Tanikawa S, Yasue K, Nakayasu A, Niizato T, Umeda K, Tanaka T (2012) Spatial patterns of initiation ages of active faulting in the Japanese Islands. Active Fault Res 37:1–15
Fleitout L, Froidevaux C (1980) Thermal and mechanical evolution of shear zones. J Struct Geol 2(1–2):159–164
Freed AM, Bürgmann R, Herring T (2007) Far-reaching transient motions after mojave earthquakes require broad mantle flow beneath a strong crust. Geophys Res Lett 34(19):L19302
Fusseis F, Handy M, Schrank C (2006) Networking of shear zones at the brittle-to-viscous transition (Cap de Creus, NE Spain). J Struct Geol 28(7):1228–1243
Gueydan F, Leroy YM, Jolivet L (2001) Grain-size-sensitive flow and shear-stress enhancement at the brittle-ductile transition of the continental crust. Int J Earth Sci 90(1):181–196
Hillert M (1988) Inhibition of grain growth by second-phase particles. Acta Metall 36(12):3177–3181
Iio Y (1997) Frictional coefficient on faults in a seismogenic region inferred from earthquake mechanism solutions. J Geophys Res Solid Earth 102(B3):5403–5412
Iio Y, Sagiya T, Kobayashi Y, Shiozaki I (2002) Water-weakened lower crust and its role in the concentrated deformation in the Japanese Islands. Earth Planet Sci Lett 203(1):245–253
Iio Y, Sagiya T, Kobayashi Y (2004) Origin of the concentrated deformation zone in the Japanese Islands and stress accumulation process of intraplate earthquakes. Earth Planets Space 56(8):831–842
Karato S (2012) Deformation of Earth materials: an introduction to the rheology of solid Earth. Cambridge University Press, Cambridge
Lachenbruch AH, Sass J (1980) Heat flow and energetics of the San Andreas Fault Zone. J Geophys Res Solid Earth (1978–2012) 85(B11):6185–6222
Leloup PH, Ricard Y, Battaglia J, Lacassin R (1999) Shear heating in continental strike-slip shear zones: model and field examples. Geophys J Int 136(1):19–40
Little T, Holcombe R, Ilg B (2002) Kinematics of oblique collision and ramping inferred from microstructures and strain in middle crustal rocks, central Southern Alps, New Zealand. J Struct Geol 24(1):219–239
Markl G (1998) The Eidsfjord anorthosite, Vesterålen, Norway: field observations and geochemical data. Norges Geologiske Undersokelse 434:53–76
Montési LG, Hirth G (2003) Grain size evolution and the rheology of ductile shear zones: from laboratory experiments to postseismic creep. Earth Planet Sci Lett 211(1):97–110
Moore JD, Parsons B (2015) Scaling of viscous shear zones with depth-dependent viscosity and power-law stress–strain-rate dependence. Geophys J Int 202(1):242–260
Nakajima J, Hasegawa A (2007) Deep crustal structure along the Niigata–Kobe Tectonic Zone, Japan: its origin and segmentation. Earth Planets Space 59(2):e5
Nakajima J, Kato A, Iwasaki T, Ohmi S, Okada T, Takeda T et al (2010) Deep crustal structure around the Atotsugawa fault system, central Japan: a weak zone below the seismogenic zone and its role in earthquake generation. Earth Planets Space 62(7):555–566
Ogawa Y, Honkura Y (2004) Mid-crustal electrical conductors and their correlations to seismicity and deformation at Itoigawa–Shizuoka Tectonic Line, central Japan. Earth Planets Space 56(12):1285–1291
Ohzono M, Sagiya T, Hirahara K, Hashimoto M, Takeuchi A, Hoso Y, Wada Y, Onoue K, Ohya F, Doke R (2011) Strain accumulation process around the Atotsugawa fault system in the Niigata–Kobe Tectonic Zone, central Japan. Geophys J Int 184(3):977–990
Okudaira T, Jeřábek P, Stünitz H, Fusseis F (2015) High-temperature fracturing and subsequent grain-size-sensitive creep in lower crustal gabbros: Evidence for coseismic loading followed by creep during decaying stress in the lower crust? J Geophys Res Solid Earth 120(5):3119–3141
Okudaira T, Shigematsu N, Harigane Y, Yoshida K (2017) Grain size reduction due to fracturing and subsequent grain-size-sensitive creep in a lower crustal shear zone in the presence of a CO2-bearing fluid. J Struct Geol 95:171–187. doi:10.1016/j.jsg.2016.11.001
Pollitz FF (2001) Viscoelastic shear zone model of a strike-slip earthquake cycle. J Geophys Res 106(26):526–541
Rohrer GS (2010) Introduction to grains, phases, and interfaces: an interpretation of microstructure (Trans AIME, 1948, vol 175, pp 15–51, by C.S. Smith). Metall Mater Trans A 41(5):1063–1100
Rutter E (1999) On the relationship between the formation of shear zones and the form of the flow law for rocks undergoing dynamic recrystallization. Tectonophysics 303(1):147–158
Rybacki E, Gottschalk M, Wirth R, Dresen G (2006) Influence of water fugacity and activation volume on the flow properties of fine-grained anorthite aggregates. J Geophys Res Solid Earth 111(B3):B03203
Savage J, Burford R (1973) Geodetic determination of relative plate motion in central California. J Geophys Res 78(5):832–845
Shimada K, Tanaka H, Toyoshima T, Obara T, Niizato T (2004) Occurrence of mylonite zones and pseudotachylyte veins around the base of the upper crust: An example from the southern Hidaka metamorphic belt, Samani area, Hokkaido, Japan. Earth Planets Space 56(12):1217–1223
Takahashi Y (2015) Geotectonic evolution of the Nihonkoku Mylonite Zone of north central Japan based on geology, geochemistry, and radiometric ages of the Nihonkoku Mylonites. In: Mukherjee S, Mulchrone KF (eds) Ductile shear zones: from micro- to macro-scales. Wiley, Chichester, UK
Takeuchi CS, Fialko Y (2012) Dynamic models of interseismic deformation and stress transfer from plate motion to continental transform faults. J Geophys Res Solid Earth 117(B5):B05403
Takeuchi CS, Fialko Y (2013) On the effects of thermally weakened ductile shear zones on postseismic deformation. J Geophys Res Solid Earth 118(12):6295–6310
Tanaka A, Yamano M, Yano Y, Sasada M (2004) Geothermal gradient and heat flow data in and around Japan (i): Appraisal of heat flow from geothermal gradient data. Earth Planets Space 56(12):1191–1194
Thatcher W, England PC (1998) Ductile shear zones beneath strike-slip faults: implications for the thermomechanics of the San Andreas Fault Zone. J Geophys Res Solid Earth 103(B1):891–905
Tullis J, Yund RA (1982) Grain growth kinetics of quartz and calcite aggregates. J Geol 90:301–318
White S (1979) Grain and sub-grain size variations across a mylonite zone. Contrib Mineral Petrol 70(2):193–202
White S, Burrows S, Carreras J, Shaw N, Humphreys F (1980) On mylonites in ductile shear zones. J Struct Geol 2(1–2):175–187
Wittlinger G, Tapponnier P, Poupinet G, Mei J, Danian S, Herquel G, Masson F (1998) Tomographic evidence for localized lithospheric shear along the Altyn Tagh fault. Science 282(5386):74–76
Yoshimura R, Oshiman N, Uyeshima M, Toh H, Uto T, Kanezaki H, Mochido Y, Aizawa K, Ogawa Y, Nishitani T et al (2009) Magnetotelluric transect across the Niigata-Kobe tectonic zone, central Japan: a clear correlation between strain accumulation and resistivity structure. Geophys Res Lett 36(20):L20311
Yuen D, Fleitout L, Schubert G, Froidevaux C (1978) Shear deformation zones along major transform faults and subducting slabs. Geophys J Int 54(1):93–119
XZ constructed the numerical model for the study, conducted all numerical experiments and drafted the manuscript. TS conceived of the study, participated in its design and coordination and helped to draft the manuscript. Both authors read and approved the final manuscript.
Graduate School of Environmental Studies, Nagoya University, Nagoya, Japan
Xuelei Zhang
Disaster Mitigation Research Center, Nagoya University, Nagoya, Japan
Takeshi Sagiya
Correspondence to Xuelei Zhang.
The authors would like to thank T. Ito and R. Sasajima for providing kind supervision, helpful comments, and continued support. Constructive reviews by J. Muto, T. Okudaira, and an anonymous reviewer improved the manuscript. This study was supported by JSPS KAKENHI Grant Number 261090003. The corresponding author was supported by a Japanese Government Scholarship for his study in Japan.
Zhang, X., Sagiya, T. Shear strain concentration mechanism in the lower crust below an intraplate strike-slip fault based on rheological laws of rocks. Earth Planets Space 69, 82 (2017) doi:10.1186/s40623-017-0668-5
Intraplate strike-slip fault
2-D thermal-mechanical fault model
Ductile shear zone
6. Geodesy
Crustal Dynamics: Unified Understanding of Geodynamics Processes at Different Time and Length Scales | CommonCrawl |
Ionized Depletion Region: Why aren't those charges being excited?
OK, so I understand the PN junction, and how when two semiconductor materials are placed together the electrons will jump into the holes near the junction, creating negatively ionized atoms on the P-side (near the junction) and positively charged atoms near the junction on the N-side.
HOWEVER, the Donor Atoms Give out Electrons at room temperature and the Acceptor Atoms move around holes at room temperature.
How come these ionized atoms aren't displacing electrons to create more holes (on the ionized P-side) or accepting more electrons (in the positively ionized region on the N-side)?
I understand that the electric field is causing some resistance... but regardless, at room temperature, why aren't the electrons getting excited out of the negatively ionized atoms on the P-side near the junction? Why are they now "fixed" to the lattice? Why aren't these atoms near the junction allowing electrons to get excited and move at room temperature (like the electrons that have filled the holes on the P-side)?
Every semiconductor physics book I've read talks about how the ionized atoms are fixed to the lattice (makes sense), but it doesn't say why they themselves aren't changing due to thermal excitations at room temperature.
Edit, to rephrase: why aren't the acceptor atoms releasing the electrons they obtained from the N-side at room temperature? I know that the electric field at the depletion region is caused by electrons filling holes in the P-region and a lack of electrons in the N-region. I'm asking what stops those acceptor atoms near the junction in the P-region from releasing the electrons they just gained, and what stops the donor atoms from being filled (these being the atoms that produce the electric field). I understand the electric field opposes more electrons filling the N-side, but what stops the P-side negative ions from being excited again?
I guess I'm confused about how the electric fields are stable, when room temperature is what excited the electrons in the first place.
electricity electrostatics semiconductor-physics
$\begingroup$ What do you mean by "...the Acceptor Atoms move around holes at room temperature."? All atoms in the system are fixed; that's why it's a lattice. Only electrons can move around. Did you mean to describe the motion of an electron "jumping" from one hole site to the other? Moreover I don't understand what you mean by "How come these Ionized Atoms aren't Displacing Electrons to create more holes..."? Can you please elaborate more on that? I understand the paragraphs after that. But I would like some more info before answering. $\endgroup$
– NanoPhys
$\begingroup$ @NanoPhys Made a quick edit above^^ $\endgroup$
$\begingroup$ Is this a fair rewording of your questions: "In the depletion region dopants must be ionised. That means that donors have lost an electron and acceptors have trapped an electron. But why does the trapped electron stay with the acceptor atom?" $\endgroup$
– boyfarrell
$\begingroup$ @boyfarrell That's a fair rewording. I'm mainly concerned with why, at room temperature, electrons are being ionized but still staying trapped with the acceptor atoms. But I also wonder why electrons aren't filling the donor atoms either (I'm assuming that's because of the electric field, which makes sense). $\endgroup$
$\begingroup$ OK. I'm working on an answer. $\endgroup$
It is only possible for the donor and acceptor atoms to de-ionise in the depletion region if they capture a free carrier (an electron or a hole, respectively). But there are no free carriers in the depletion region because they have all been swept out by the strong electric field (something like 30$-$40 kV cm$^{-1}$!).
So why then do the electrons from the n-side stay with the acceptor atoms on the p-side once the junction has formed?
The short answer is because the carrier trapped by the dopant atoms would have to gain almost a bandgap's worth of energy to de-ionize.
The longer answer. Let's assume an acceptor atom in the p-side depletion region de-ionises by giving up its captured electron. What happens? The electron is pushed back to the n-side by the field. However, the system is now no longer in equilibrium because the p-side is charged to +1 and the n-side is charged to -1. This is not stable! You can see that if you run this forward in time, eventually an electron from the n-side will have to neutralise the acceptor, bringing the material back to charge neutrality.
When you solve the Poisson equation for the pn-junction this is what you are solving for: the equilibrium distribution for charge neutrality. There are probably carrier dynamics like de-ionisation happening but they only serve to push the system out of equilibrium temporarily, eventually equilibrium will always be restored.
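The stability of the trapped carriers can be made concrete with a one-line Boltzmann estimate, assuming a silicon-like bandgap of about 1.1 eV (the answer does not specify a material):

```python
import math

# Boltzmann factor for a trapped carrier to gain roughly a bandgap of thermal
# energy at room temperature. The 1.1 eV silicon bandgap is an assumption.
k_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # room temperature, K
E_gap = 1.1      # bandgap, eV

kT = k_B * T
p = math.exp(-E_gap / kT)
print(f"kT ≈ {kT * 1000:.0f} meV, E_gap/kT ≈ {E_gap / kT:.0f}, exp(-E_gap/kT) ≈ {p:.0e}")
```

A suppression factor of roughly 10^-19 is why de-ionisation events are far too rare to push the junction out of its equilibrium charge distribution for long.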
boyfarrell
$\begingroup$ I don't think it matters what the rate is. No matter what the perturbing rates are in a system a stable equilibrium state will eventually be reached. I think it's best to view this problem from a statistical perspective rather than from considering the microscopic processes. The depletion region is stable once formed. $\endgroup$
$\begingroup$ I think the rate will be quite low because the electron (from the p-side acceptor) would have to acquire enough thermal energy to reach the conduction band of the material (about a ~1eV jump). The thermal energy at room temperature is much lower ~25meV, so I imagine that once acceptor captures the free electron (from the n-side) it will be quite stable. $\endgroup$
$\begingroup$ Yes, but the opposite is happening on the n-side. The n-side is ionised and wants to capture an electron from the valence band, therefore the energy gap is the same ~1eV. $\endgroup$
$\begingroup$ Right! You can think of it that way. On the n-side, to ionize a donor the electron only needs several ~kT; however, to de-ionise an acceptor (release an electron to the conduction band) on the p-side, the electron needs about ~40 kT. The same thing happens when you consider it from the "hole perspective". $\endgroup$
$\begingroup$ Yay! You got it! My pleasure; every time I am forced to think about the pn-junction I learn something new, so thank you for your question. The pn-junction is actually a surprisingly complex piece of physics (in concept and computationally). Keep asking questions if you get stuck. $\endgroup$
The Boltzmann distribution is a basic distribution as a function of temperature. It states that the probability of finding particles at a given energy decreases as the energy increases, but that at higher temperatures, higher energies are more likely to be found. For a single particle species, the Boltzmann distribution looks like \begin{equation} P(E)\propto e^{\frac{-E}{k_B T}}, \end{equation} where $k_B$ is the Boltzmann constant and $T$ is the system temperature. Generally, this means that the higher the temperature, the more likely you are to find fast moving particles.
If a charged particle is in an electric field, then that field makes a contribution to the energy, in addition to the purely kinetic energy it has ($m v^2/2$). Written this way (and ignoring factors of 1/2, etc.),
\begin{equation} P(E)\propto e^{\frac{-(m v^2 - q \phi)}{k_B T}}, \end{equation}
This says that, at higher temperatures, charges are likely to be moving faster, or to be in higher potential regions, or a little of both. Charges cross the P-N junction through simple diffusion, which is related to their velocity $v$. The more charges that move across this junction, the higher $\phi$, the electric potential (a measure of the electric field), becomes. This means that as more charges migrate and establish a large potential, particles in those regions tend to have less velocity, for a fixed temperature $T$. So while there is still significant temperature in the region, it is no longer manifest purely as kinetic energy. With diminished kinetic energy, diffusion takes place more slowly, and so eventually charges no longer cross the P-N junction (this is when the junction is in equilibrium).
Alternatively, you can look at it like this: charges cross the P-N junction, creating an electric field across the junction. However, the establishment of this field impedes the crossing of more charges. At equilibrium, the drift velocity of charges due to the electric field exactly counteracts the diffusion drift velocity. So, your hot electrons are bouncing out and trying to cross over, but the accumulated charge opposes the crossing, and your electron settles back where it came from.
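The equilibrium point where drift exactly cancels diffusion fixes the built-in potential of the junction. A sketch with assumed silicon doping levels (all numbers illustrative, not taken from the answer):

```python
import math

# Built-in potential of a pn-junction at equilibrium, where drift exactly
# balances diffusion. Material and doping values are illustrative assumptions.
kT_q = 0.02585      # thermal voltage at 300 K, volts
n_i = 1.0e10        # intrinsic carrier density of silicon, cm^-3
N_a = 1.0e16        # acceptor doping, cm^-3
N_d = 1.0e16        # donor doping, cm^-3

V_bi = kT_q * math.log(N_a * N_d / n_i**2)
print(f"Built-in potential ≈ {V_bi:.2f} V")   # ≈ 0.71 V
```

This is the potential step that "hot" carriers would have to climb, which is why thermal diffusion alone cannot keep charges crossing once the junction has formed.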
KDN
Do we know anything about the age of the universe?
I am looking to understand how the age of the universe is calculated according to modern physics.
My understanding is very vague as the resources I have found do not seem to state consistently whether inflation is part of the standard model.
For example, starting at the Age of the Universe wikipedia page, the age is calculated precisely within +/- 21 million years according to the Lambda-CDM model.
It is frequently referred to as the standard model...
The ΛCDM model can be extended by adding cosmological inflation, quintessence and other elements that are current areas of speculation and research in cosmology.
Then I read:
The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, $ \Omega _{\Lambda }$, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia Supernovae, or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data, or more than 68.3% (2018 estimate) of the mass-energy density of the universe.
So this is where the numbers come from. The Dark Energy Survey page on wikipedia states:
The standard model of cosmology assumes that quantum fluctuations of the density field of the various components that were present when our universe was very young were enhanced through a very rapid expansion called inflation.
which appears to contradict what was said about the standard model on the Age of the Universe page.
From there I read about supernovae and standard candles.
All these pages list so many theories and problems that it seems hard to say what we know for certain, i.e. something that no physicist would disagree with.
I am looking to understand what I have misunderstood here or whether this is a fair characterization:
It seems a very simple calculation from the Hubble constant gave us a number for the age of the universe. But since the 1960's it's been known that the universe is "flat" as accurately as we can measure i.e. $ \Omega = 1 $, and though this falsifies the hypothesis (of Hubble's law), we've kept the age to hang physical theories off, but in a way that can no longer be justified from first principles and observations.
Surely we have made observations, and there are things we can infer from them. And my question is:
Is the age of the universe something we can infer from our observations without appealing to an empirically inconsistent model? And if so, how? And how do we get the numbers out of the equations?
general-relativity cosmology time space-expansion big-bang
$\begingroup$ This is relevant: youtube.com/watch?v=Y6Vhh70Lw9w $\endgroup$
– lvella
$\begingroup$ I've deleted a number of obsolete or off-topic comments and/or responses to them. $\endgroup$
$\begingroup$ @DavidZ The comments were highly relevant. Are you some kind of gatekeeper? $\endgroup$
$\begingroup$ @David If a comment is not requesting clarification from the OP or offering suggestions to the post then they really shouldn't have been made in the first place. Any comment that does not fall into these two categories is in fact off-topic / obsolete. Any useful and relevant information should be put into an answer. (Even this comment is not really a good comment, but I figured you should have an explanation). $\endgroup$
– BioPhysicist
The rough idea is that under the assumptions contained in the cosmological principle, the application of Einstein's equations leads us to the equation $$d(t) = a(t) \chi$$ where $d(t)$ is called the proper distance and $\chi$ is called the comoving distance between two points in space. $a(t)$ is the time-dependent scale factor, which is by convention set to $1$ at the present cosmological time.
The rate at which this proper distance increases (assuming no change in the comoving distance $\chi$) is then
$$d'(t) = a'(t) \chi$$
The observation that distant galaxies are receding, and that the recession velocity is proportional to the observed proper distance with proportionality constant $H_0$ (Hubble's constant) tells us that $a'(0) = H_0$. If we assume that $a'(t)$ is constant, then $$d(t) = (1+H_0 t) \chi$$ and that when $t=-\frac{1}{H_0}$, the proper distance between any two points in space would be zero, i.e. the scale factor would vanish. This leads us to a naive estimate of the age of the universe, $T = \frac{1}{H_0} \approx 14$ billion years.
Of course, there is no particular reason to think that $a'(t)$ should be constant. The dynamics of the scale factor are determined by the distribution of matter and radiation in the universe, and on its overall spatial curvature. For example, if we assume that the universe is spatially flat and consists of dust and nothing else, then we find that
$$a(t) = (1+\frac{3}{2}H_0 t)^{2/3}$$ where $H_0$ is the current-day Hubble constant and $t$ is again measured from the present. In such a universe, the scale factor would vanish when $t = -\frac{2}{3}\frac{1}{H_0}$, so the age of the universe would be 2/3 the naive estimate. More generally, if we model the contents of the universe as a fluid having a density/pressure equation of state $p = wc^2\rho$ for some number $w$, then we would find
$$a(t) = \left(1 + \frac{3(w+1)}{2}H_0 t\right)^\frac{2}{3(w+1)}$$ leading to respective ages $$T = \frac{2}{3(w+1)}\frac{1}{H_0}$$
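Evaluating $T = \frac{2}{3(w+1)}\frac{1}{H_0}$ for the two standard single-fluid cases (again assuming $H_0 = 70$ km/s/Mpc, a value not quoted in the answer):

```python
# Single-fluid flat-universe ages T = 2 / (3 (w + 1) H0), assuming a round
# H0 of 70 km/s/Mpc (an illustrative value, not one quoted in the answer).
inv_H0_Gyr = 3.0857e19 / (70.0 * 3.156e16)   # 1/H0 in gigayears

ages = {w: 2.0 / (3.0 * (w + 1.0)) * inv_H0_Gyr
        for w in (0.0, 1.0 / 3.0)}           # dust (w=0), radiation (w=1/3)

for w, T in ages.items():
    print(f"w = {w:5.2f}  age = {T:5.2f} Gyr")
```

Dust gives 2/3 of the naive Hubble-time estimate and radiation gives 1/2, matching the general formula above.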
The $\Lambda_{CDM}$ model assumes that the universe can be appropriately modeled as a non-interacting combination of dust and cold dark matter $(w=0)$, electromagnetic radiation $(w=1/3)$, and dark energy, and having an overall spatial curvature $k$. The Friedmann equation can be put in the form
$$\frac{\dot a}{a} = H_0\sqrt{(\Omega_{c}+\Omega_b)a^{-3} + \Omega_{EM}a^{-4} + \Omega_ka^{-2} + \Omega_\Lambda a^{-3(1+w)}}$$
where $w$ is the equation of state parameter for the dark energy/cosmological constant and the $\Omega$'s are parameters which encapsulate the relative contributions of cold dark matter, baryonic (normal) matter, electromagnetic radiation, spatial curvature, and dark energy, respectively. By definition, $\sum_i \Omega_i = 1$. Note that if we set all the $\Omega$'s to zero except for $\Omega_b=1$, we recover the solution for dust from before.
The electromagnetic contribution is small in the present day, so neglecting it is reasonable as long as $\Omega_{EM}a^{-4}\ll \Omega_ma^{-3} \implies a\gg \Omega_{EM}/\Omega_m$. If additionally the universe is spatially flat so $\Omega_k=0$ (as per the Planck measurements) and $w=-1$ (consistent with dark energy being attributable to a cosmological constant), then this is reduced to
$$\frac{\dot a}{a} = H_0\sqrt{(\Omega_{c}+\Omega_{b})a^{-3}+\Omega_\Lambda}$$ This can be solved analytically to yield
$$a(t) = \left(\frac{\Omega_c+\Omega_b}{\Omega_\Lambda}\right)^{1/3} \sinh^{2/3}\left(\frac{t}{T}\right)$$
where $T \equiv \frac{2}{3H_0\sqrt{\Omega_\Lambda}}$ and now $t$ is measured from the beginning of the universe. Setting this equal to 1 allows us to solve for the time to the present day.
The Planck satellite measured $\Omega_b=0.0486,\Omega_c=0.2589,$ and $\Omega_\Lambda=0.6911$ (they don't add up to 1 because we've neglected $\Omega_{EM}$ and $\Omega_k$). The result is an age of the universe
$$t =T\sinh^{-1}\left(\left[\frac{\Omega_\Lambda}{\Omega_c+\Omega_b}\right]^{1/2}\right) = \frac{2}{3H_0\sqrt{\Omega_\Lambda}}(1.194) \approx 13.84\text{ billion years}$$
The actual calculation is more careful, but this is the general idea.
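The closed-form age above is straightforward to evaluate. $H_0 = 67.74$ km/s/Mpc is an assumed Planck-like value here; the answer quotes only the density parameters:

```python
import math

# Evaluate the closed-form flat-LCDM age from the answer. The Omegas are the
# quoted Planck values; H0 = 67.74 km/s/Mpc is an assumed Planck-like value.
H0 = 67.74 / 3.0857e19          # Hubble constant converted to 1/s
Omega_m = 0.0486 + 0.2589       # baryons + cold dark matter
Omega_L = 0.6911                # dark energy

T = 2.0 / (3.0 * H0 * math.sqrt(Omega_L))         # timescale, seconds
t = T * math.asinh(math.sqrt(Omega_L / Omega_m))  # age, seconds

t_Gyr = t / 3.156e16            # seconds per gigayear
print(f"Age ≈ {t_Gyr:.2f} Gyr") # ≈ 13.8 Gyr
```

Small differences in the assumed $H_0$ shift the result by a few tens of millions of years, which is why quoted ages range from about 13.8 to 13.84 Gyr.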
J. Murray
$\begingroup$ The original question may be confusing standard $\Lambda$CDM cosmological expansion with "inflation". You may want to clarify that early universe inflation lasts for $\sim 10^{-32}$ sec, which is why it is neglected in these calculations. $\endgroup$
– Paul T.
$\begingroup$ That's not why inflation is neglected: It could have lasted for an arbitrarily long time. It's neglected because the hot Big Bang is identified with the end of inflation. $\endgroup$
– bapowell
$\begingroup$ @David Is that a question? $\endgroup$
– J. Murray
$\begingroup$ @David I'm afraid if you're looking for a model which does not involve assumptions, physics (and empirical science in general) is not the right game for you. Every physical model contains assumptions - all you can do is understand what assumptions you are making, and see whether experimental measurement validates them as reasonable. $\endgroup$
$\begingroup$ @David I did read your question, and I think that is an unrealistic expectation to have about something as complex as the age of the universe. There are observations, like the examples you give - and then there are interpretations of those observations in the context of models and theories. Since we cannot observe the beginning of the universe, we must make models and observe how well they work. No model is above criticism, and none is (or should be) assumed to be correct by everyone. $\endgroup$
I'm not too interested in providing an answer from the cosmological point of view. It is clear that the age of the universe derived in that way is model-dependent. The age thus obtained depends on certain assumptions (e.g. that the dark energy density remains constant).
I will just add a couple of additional age determination methods that rely on alternative "non-cosmological" methods, that provide at least some verification that the answers from cosmology are in the right ballpark.
Stellar evolution calculations rely on very solid, non-controversial physics. These predict that stars spend most of their lives burning hydrogen in their cores before evolving away from the main sequence. By comparing the predictions of these models with the luminosity, temperature, surface gravity and chemical composition of stars, we can estimate their age; particularly those that have begun their evolution away from the main sequence. If we look around the solar neighborhood, we see a variety of stars with different ages. The oldest stars appear to be the ones with the most metal-poor composition and they have ages of around 12-13 billion years. The universe must be at least this old.
When stars "die" the lowest mass objects will end their lives as white dwarfs. These cinders of carbon and oxygen are supported by electron degeneracy, release no internal energy and cool radiatively. The coolest, lowest luminosity white dwarfs we can see will be the ones that have been cooling longest. The cooling physics is relatively simple - if the lowest luminosity white dwarfs have temperatures of around 3000K and are a millionth of a solar luminosity, then one works out a cooling age of around 11-12 billion years. The progenitors of these objects will have had their own short lifetimes, so estimate #2 is consistent with estimate #1 and provides a minimum age for the universe.
At the moment, our observations of high redshift galaxies suggest that galaxy formation and the formation of the first stars occurred relatively quickly after the universe was very small and hot. The first galaxies and stars were assembled at redshifts of at least 6. This in turn suggests that the pre-stellar "dark ages" were comparatively short. The age of the universe at a redshift of 6 is much less dependent on cosmological assumptions and parameters, but in any case is a small fraction ($<10$%) of the age of the universe now (e.g. in the concordance LCDM model, $z=6$ is only 0.94 billion years post big-bang, but this only changes to 0.86 billion if there is no dark energy). Thus we can be reasonably sure that the age of the universe (or at least the time since the universe was very small and very hot) is perhaps only a billion or less years older than the oldest stars we can see.
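The 0.94 billion year figure at $z=6$ follows from the flat ΛCDM closed form evaluated at scale factor $a = 1/(1+z)$; the parameter values below are assumed Planck-like numbers, not ones quoted in this answer:

```python
import math

# Age of a flat LCDM universe at redshift z, from the closed-form solution
# a(t) ∝ sinh^(2/3)(t/T). Parameter values are assumed Planck-like numbers.
H0 = 67.74 / 3.0857e19     # Hubble constant in 1/s
Om, OL = 0.3075, 0.6911    # matter and dark-energy density parameters

def age_Gyr(z):
    T = 2.0 / (3.0 * H0 * math.sqrt(OL))
    return T * math.asinh(math.sqrt(OL / Om) * (1 + z) ** -1.5) / 3.156e16

print(f"age at z=6: {age_Gyr(6):.2f} Gyr")   # ≈ 0.94 Gyr
print(f"age at z=0: {age_Gyr(0):.2f} Gyr")   # ≈ 13.8 Gyr
```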
You can mess about with cosmological parameters (and their time dependence) a bit to alter these results. But you can't make the universe much younger without conflicting with the evidence from old stars and white dwarfs. You also can't make it much older whilst simultaneously accounting for the lack of older stars in our own and other galaxies, the cosmic microwave background (and its temperature), the abundance of helium and deuterium in the universe or the rate of evolution of cosmic structure. I think most scientists would agree that the $\pm 21$ million year error bar implicitly assumes the LCDM model is correct (age calculated as per some of the other answers). The true error bar could be a factor of 10 higher, given the current debate about differences in $H_0$ derived from the CMB as opposed to the local universe, but probably not a factor of 100. Even a naive extrapolation back in time of the currently observed expansion rate gives an age of around 14 billion years.
It is also possible to avoid a singular big bang in the past altogether, by having the current phase of our universe's expansion beginning at the end of a previous contraction phase (a.k.a. the big bounce). In which case, the "real" age of the universe can be anything you like, with 13.8 billion years just being the time since the latest bounce.
ProfRob
"At the moment our observations of high redshift galaxies suggest that galaxy formation and the formation of the first stars was a rapid process." - Does this conclusion rely on the Friedmann metric? In other metrics the relation between the redshift, distance, and time may be dramatically different.
– safesphere
@safesphere High redshift means most of the way back to the big bang, which is how the "age of the universe" is defined in the question. Yet again you don't provide an answer...
– ProfRob
How do we know "the formation of the first stars was a rapid process"? How is the formation time estimated?
"The age of the universe at a redshift of 6 is much less dependent on cosmological assumptions, but in any case is a small fraction of the age of the universe now." @safesphere. If you know different then write an answer (and an answer that makes clear why the universe at $z=6$ looks totally different to the universe today and accounts for the CMB).
@Edouard you will see that my answer incorporates a discussion of cosmic bounces. The age of 13.8 billion years is only ever claimed to be the age after the most recent inflationary episode.
To compute the age of the universe, one must solve the equation: $$\frac{1}{a}\frac{da}{dt} = H_0 \sqrt{\frac{\Omega_{\gamma,0}}{a^4}+\frac{\Omega_{m,0}}{a^3}+\frac{\Omega_{k,0}}{a^2} +\Omega_{\Lambda,0}}$$ where $\Omega_\gamma$, $\Omega_m$, $\Omega_k$, $\Omega_\Lambda$ are the density parameters of radiation, matter, curvature, and vacuum energy, and the subscript '0' denotes present-day quantities. This expression comes directly from the Friedmann equation relating the Hubble parameter, $H=\dot{a}/a$, to the density, $\rho$, $$H^2 = \frac{8\pi}{3m_{\rm Pl}^2}\rho.$$ The density parameter $\Omega$ is simply $\Omega = \rho/\rho_c = \frac{8\pi}{3m_{\rm Pl}^2H^2}\rho$, where $\rho_c$ is the critical density.
Now, to solve this equation we simply need values for these density parameters of the individual components. If we're going for an approximation, we can set $\Omega_{\gamma,0} \approx \Omega_{k,0} \approx 0$ and solve the resulting integral for $t$, $$ t = \frac{1}{H_0}\int_0^a \frac{da'}{a'\sqrt{\Omega_{m,0}/a'^3 + \Omega_{\Lambda,0}}}=\frac{1}{H_0}\int_0^a\frac{\sqrt{a'}da'}{\sqrt{\Omega_{m,0}+\Omega_{\Lambda,0}a'^3}}.$$ This can be solved analytically by taking $x=a'^{3/2}$, giving $$t = \frac{2}{3H_0\sqrt{1-\Omega_{m,0}}}\sinh^{-1}\left(\sqrt{\frac{1-\Omega_{m,0}}{\Omega_{m,0}}}a^{3/2}\right).$$
To get the age of the universe, insert $a=1$ and the most up-to-date value of $ \Omega_{m,0}$.
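Plugging in numbers is a one-liner. This is only a sketch: $H_0 = 67.7$ km/s/Mpc and $\Omega_{m,0} = 0.31$ are assumed Planck-like values, and the analytic solution is implemented with the inverse hyperbolic sine.

```python
from math import asinh, sqrt

H0 = 67.7                  # km/s/Mpc (assumed value)
T_H = 977.8 / H0           # Hubble time 1/H0 in Gyr

def age_flat_lcdm(a=1.0, omega_m=0.31):
    """Closed-form t(a) for a flat matter + Lambda universe:
    t = 2/(3 H0 sqrt(Om_L)) * asinh( sqrt(Om_L/Om_m) * a^(3/2) )."""
    omega_l = 1.0 - omega_m
    return (2.0 / (3.0 * sqrt(omega_l))) * asinh(sqrt(omega_l / omega_m) * a**1.5) * T_H

print(age_flat_lcdm())     # a = 1 gives ~13.8 Gyr
```

For small $a$ the expression reduces to the matter-dominated result $t \propto a^{3/2}$, as expected.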
One comment: inflation is not relevant here because we are starting the integration after inflation ends. Inflation could have lasted for an arbitrarily long time, and the standard hot big bang is effectively taken to correspond to the end of inflation in our causal patch.
bapowell
This is not a full answer, but I think it will help if you separate out inflation from the rest of the picture. The age of the universe can be estimated in the first instance as the time elapsed since some very early epoch where the temperature was low enough that the Standard Model of particle physics applies to reasonable approximation. This means you can leave out the very early processes which remain very unknown in any case.
With this approach one then applies general relativity and standard physics to construct a model of the main components of the universe, and one can estimate the evolution with reasonable confidence; see a textbook for details. This is how the age is estimated.
Andrew Steane
Well this leads to perhaps a simpler statement of my issue. We inferred inflation from red-shift giving the universe an age. We didn't observe inflation. Then leaving out inflation how do we get the age of the universe?
Which "inflation" do you mean---the very early conjectured one, or the accelerated expansion that is thought to be ongoing now and is somewhat better evidenced? If the latter then we don't leave it out but include it, for example as a cosmological constant, and get its size by best fit to a number of strands of evidence.
– Andrew Steane
Because it refers to temperature (which is phenomenally unlikely to be uniform everywhere, even in a universe not including causally-separated localities), I believe this answer is the most compatible with my suggestion (not accepted by the edit reviewers) that the title question be modified to change "the universe" to something like "our local universe", which would imply a multiverse and inflation (to which the slower multiple processes of the original Big Bang Theory would still provide an alternative). I've also upvoted 1 or 2 other answers seeming plausible, as is permitted on this site.
– Edouard
What's called the "age of the universe" would more accurately be called the age of the most recent epoch in the universe's history. That epoch began with the end of inflation, or with the end of whatever noninflationary process created the incredibly uniform expanding quark-gluon plasma that eventually clumped into stars, planets and us. We don't know the age of everything that exists, and probably never will, but we know the age of the expanding cosmos that we live in.
The best current model of that cosmos (the simplest model that fits all the data) is called Lambda-CDM (ΛCDM). ΛCDM has a singularity of infinite density called the "big bang singularity", and times measured from that singularity are called times "after the big bang" (ABB). Our current location in spacetime is about 13.8 billion years ABB, and that's called the "age of the universe".
But no one believes that the singularity in ΛCDM has any physical significance. To get a correct model of the universe, you need to remove the singularity and some short time interval after it from the model, and graft some other model onto it.
The most popular candidates for models of the previous epoch are based on cosmic inflation. They fit all of the available data, but the amount of still-visible information about the universe of 13.8 billion years ago is small enough that we can't draw any definite conclusions. That's where things stand today.
(There's a disturbing possibility that that's where things will stand forever, because according to ΛCDM and semiclassical quantum mechanics, the total amount of information that we will ever be able to collect about the early universe is finite, and it may not be large enough to pin down the right model. Even the information that enabled us to pin down the parameters of ΛCDM will be inaccessible to far-future civilizations, according to ΛCDM.)
By this terminology, inflation ends a tiny fraction of a second ABB, and this has given rise to a common misconception that inflation only lasts a tiny fraction of a second. Actually, depending on the model, the inflationary epoch can last for essentially any amount of time, and whatever preceded it could have lasted for any amount of time, if time even has meaning at that point. None of this is counted in ABB times.
ABB times do include a fraction of a second that is literally meaningless, since it's from the early part of ΛCDM that we remove as unrealistic, but we can't calculate any ABB time to nearly that accuracy, so it doesn't really matter.
benrg
I upvoted your answer, but I believe "the singularity" is sometimes considered (probably even by some small minority of physicists) to represent a boundary between physics (whose conclusions must be either observed or experimentally proven, or potentially falsifiable by observation or experiment) and philosophy or religion.
Nikodem J. Poplawski has preprinted many papers on arXiv between 2010 and 2020 that provide a conceivably-falsifiable explanation of why the singularity's infinite density has not been distributed into any potential observability, although he uses Einstein-Cartan Theory, and a 2018 review of some plausible relations between ECT & cosmology, by Boehmer and visible at arxiv.org/pdf/1709.07749.pdf , does not mention Poplawski in its dozens of references.
"But no one believes that the singularity in ΛCDM has any physical significance. To get a correct model of the universe, you need to remove the singularity and some short time interval after it from the model, and graft some other model onto it." is a very revealing statement.
$\Lambda CDM$'s claim of the Universe being 13.8B years old should be taken with a grain of salt.
The Universe (as depicted by $\Lambda CDM$) hypothetically underwent inflation only for a fraction of a second shortly after the Bang, a duration negligible compared with its current age. Therefore, you shouldn't be hung up on inflation when it comes to estimating its age, although inflation has allegedly left some permanent marks on $\Lambda CDM$, such as near flatness ($\Omega_k=0$), as you noticed.
That being said, you should be alarmed by $\Lambda CDM$'s inconsistent stories about the Universe's late-time history at low redshift (long after inflation was gone), evidenced by the contradictory measurements of the Hubble constant ($H_0$) (the "Hubble tension" is all over the place), which could have real implications for the uncertainty of the dark energy density ($\Omega_\Lambda$) and the true age of the Universe.
The standard cosmology model $\Lambda CDM$ has been known as the "concordance model". Given the "Hubble tension" and other inconsistencies (check out the controversy surrounding $\sigma_8$), "discordance model" might be a more suitable name for $\Lambda CDM$.
Hence $\Lambda CDM$'s calculation of the Universe being 13.8B years young should not be taken too seriously, at least one should put a much higher error margin on the number.
MadMaxMadMax
A network based approach to drug repositioning identifies plausible candidates for breast cancer and prostate cancer
Hsiao-Rong Chen1,2,
David H. Sherr3,
Zhenjun Hu1 &
Charles DeLisi1,4
The high cost and the long time required to bring drugs into commerce is driving efforts to repurpose FDA approved drugs—to find new uses for which they weren't intended, and to thereby reduce the overall cost of commercialization, and shorten the lag between drug discovery and availability. We report on the development, testing and application of a promising new approach to repositioning.
Our approach is based on mining a human functional linkage network for inversely correlated modules of drug and disease gene targets. The method takes account of multiple information sources, including gene mutation, gene expression, and functional connectivity and proximity of within module genes.
The method was used to identify candidates for treating breast and prostate cancer. We found that (i) the recall rate for FDA approved drugs for breast (prostate) cancer is 20/20 (10/11), while the rates for drugs in clinical trials were 131/154 and 82/106; (ii) the ROC/AUC performance substantially exceeds that of comparable methods; (iii) preliminary in vitro studies indicate that 5/5 candidates have therapeutic indices superior to that of Doxorubicin in MCF7 and SUM149 cancer cell lines. We briefly discuss the biological plausibility of the candidates at a molecular level in the context of the biological processes that they mediate.
Our method appears to offer promise for the identification of multi-targeted drug candidates that can correct aberrant cellular functions. In particular the computational performance exceeded that of other CMap-based methods, and in vitro experiments indicate that 5/5 candidates have therapeutic indices superior to that of Doxorubicin in MCF7 and SUM149 cancer cell lines. The approach has the potential to provide a more efficient drug discovery pipeline.
The high cost and the long time required to bring drugs into commerce [1–3] is driving efforts to repurpose FDA approved drugs—to find new uses for which they weren't intended, and to thereby reduce the overall cost of commercialization, and shorten the lag between drug discovery and availability [4]. Among the successes of this approach are sildenafil, originally developed as a cardiovascular drug [5] and repositioned to treat erectile dysfunction; and zidovudine (AZT), originally developed as an anticancer drug [6], and repositioned for the treatment of HIV. These discoveries, though serendipitous, motivated more systematic approaches which might amplify the number of discoveries many-fold.
Systematic approaches generally begin with some form of computer based screening to generate large numbers of plausible candidates [7–11]. Many current computational strategies exploit shared similarities among drugs or diseases and infer similar therapeutic applications or drug selections. Drug similarities include chemical structures [12–14], drug-induced phenotypic side effects [12, 15], molecular activities [16]. Disease similarities include phenotypic similarity constructed by identifying similarity between MeSH terms [17] from OMIM database [18]; semantic phenotypic similarity [12]. The efficacy of the candidates generated by such approaches would not exceed that of existing drugs since the disease biomarkers remain the same.
A more general approach searches for disease (Gene Expression Omnibus, GEO) and drug (CMap) induced transcriptional profiles that are inversely correlated [19–23]. Strong anti-correlation between the gene expression profiles of an FDA approved drug and those of a disease for which it was not intended identifies the drug as a candidate for repositioning. This procedure, though useful, is relatively agnostic with respect to the functional relations between profiles (the ordered lists of perturbed genes). A drug identified this way is limited in that it is not informed by cellular function, but simply targets a group of generally non-interacting differentially expressed genes.
The idea underlying our method, which we refer to as the method of functional modules (MFM), is to impose the condition that candidates must affect the same cellular functions in opposite ways, and to use information about DNA as well as RNA. In particular we search for drugs that strongly perturb sets of genes having the following properties: (i) they share a strong functional relationship (ii) they are mutated in the disease state (iii) their expression is highly perturbed by the disease (iv) they are within significantly perturbed pathways of diseases. Functional association is based on position in a human functional linkage network (FLN) [24]—an evidence weighted network that provides a quantitative measure of the degree of functional association among any set of human genes. This means the method integrates multiple sources of evidence such as protein-protein interactions and is not limited to catalogued functional associations, e.g. KEGG, but uses a general approach to find functional modules.
We used genome-wide transcriptional data for more than 3500 compounds provided by LINCS [25] and identified 519 (410) repositioned drug candidates for breast (prostate) cancer. We also compared the accuracy of our method with that of comparable approaches [20, 22] (see Results). We applied CMap datasets and ranked bioactive compounds using different methods, then compared the predictability of the ranked lists of compounds (see Statistical validation). We then presented evidence that a set of disease mutated genes and their nearest FLN neighbors (mutation associated genes (MAGs), see Methods) provided more functional insight than a set of differentially expressed genes in the disease.
In addition to these computational assessments, in vitro viability tests confirmed that 4 of our predicted drug candidates were more efficacious than Doxorubicin--an FDA-approved drug for breast cancer--against MCF7 and SUM149 cell lines.
The method builds on, and substantially extends, the work of Shigemizu et al. [22]. In particular: (i) we took account of information on mutations (DNA) as opposed to just expression (RNA); and (ii) we took account of functional information by using a so-called FLN [24], as explained below. Specifically, we annotated mutated genes on the FLN [24], and identified and eliminated all genes that 1) are not within a specified distance of a mutated gene (the functional module constraint); 2) have a differential expression below some threshold (the disease condition constraint); 3) are not in pathways that distinguish the cancer/normal phenotype.
An FLN [24] is represented as a network of nodes (genes/proteins) connected by links whose weights are proportional to the likelihood that the connected nodes share common biological functions. We set a threshold on linkage weight so as to exclude approximately 95 % of the neighbors of any given node, leaving clusters of functionally related aberrant genes. We carried out the procedure twice, once starting with mutated genes and their first nearest neighbors, and then with mutated genes and their first and second nearest neighbors.
We considered each drug in turn and identified two FLN landscapes: one defined by genes that are up-regulated by the disease and down regulated by the drugs (Up regulated Cancer gene, Down regulated Bioactive target gene--UCDB) and, the other defined by genes that are down regulated by disease and up regulated by the drug (DCUB). Each landscape was thus an interconnected set of drug and disease perturbed genes. Finally we assigned a score, mutual predictability (discussed below), which measured the connectivity within each landscape, which is roughly speaking the extent to which the drug and disease genes sets are correlated. The greater the relationship, the higher the likelihood that the drug is a viable candidate for repositioning. The methodology is summarized in Fig. 1. The specifics follow.
Analytic workflow. (1) After mapping mutated genes to the FLN, identify the functional neighbors that are up or down regulated (DEG: differentially expressed genes) and within significantly enriched disease pathways (FDR < 0.05). (2) Map the genes that are down or up regulated by drug candidates to the FLN (3) Compute the MP score; i.e. the significance of the functional overlap between the drug and disease perturbed genes (see text). (4) Rank the compounds according to the MP score. (5) Compute the sensitivity and specificity of the ranked list of compounds. (6) Repeat the process with different groups of MAG and DRG (Drug Response Gene) generated by looping over the parameters (m & k). (7) Choose the parameter set that has highest sensitivity and specificity. (8) The drug candidates are chosen from the ranked list generated by the best parameter set. (9) The top ranked drug candidates are chosen for in vitro experimental validation
Well-documented mutated genes were downloaded from the Online Mendelian Inheritance in Man (OMIM) (http://www.ncbi.nlm.nih.gov/omim) [18]. 40 breast cancer and prostate cancer and 69 leukemia well-documented genes were obtained from OMIM (see Additional file 1). FLN was downloaded from http://visant.bu.edu/misi/fln/.
Transcript levels
The differentially expressed genes were obtained from the Illumina HiSeq 2000 RNA Sequencing platform for 108 breast and 51 prostate paired tumor and normal samples, downloaded from the TCGA portal (http://cancergenome.nih.gov/). Differential expression data in response to leukemia (GSE1159, GSE9476) were obtained from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) (http://www.ncbi.nlm.nih.gov/geo/). The ranked list of differentially expressed genes was generated using edgeR [26] and a t-statistic.
Ranked lists of differentially expressed genes in response to compounds applied to breast cancer (MCF7 cell line), myelogenous leukemia (HL60 cell line), and prostate cancer (PC3 cell line) were obtained from the Connectivity Map (CMap, build 02; https://www.broadinstitute.org/cmap) [20] and LINCS (level 4; http://www.lincscloud.org/) [25].
Mutation-associated genes (MAG)
The procedure maps to the FLN, known mutated drivers for the disease of interest, and their first nearest neighbors. It then sets the linkage threshold to 0.2, eliminating 95 % of the links and leaving gene clusters each of which is relatively homogeneous functionally. The remaining genes are further selected by 1) setting a threshold on transcription level; 2) filtering out the genes that are not in pathways that distinguish phenotype (i.e. cancer from normal--see Pathway enrichment analysis). As indicated below we were left with relatively small gene sets at the end of the process. In order to identify well-correlated drug-disease gene sets, the definitions of up- and down-regulated genes were not tightly constrained. In particular, we looped through m sets of various sizes, ranging from the 1000 most up-regulated genes, to the top half of the total number of genes in our universe--which depends on the number of probes on the chip--in increments of 2,000. A similar procedure was followed to obtain networks of the most down-regulated genes.
Networks were obtained for each member of our universe of bioactive compounds. A drug was ranked in accord with the intersection between its functional network and the disease functional network, as described below. The procedure was then repeated, by starting with first and second nearest neighbors. The final number of MAG ranged from 75 to 1074 for breast cancer; 15 to 460 for prostate cancer; and 46 to 772 for leukemia.
Pathway enrichment analysis
We focused on the enrichment of pathways abnormally perturbed in the disease state compared to the normal state. PWEA [27] (http://zlab.bu.edu/PWEA/download.php) was used to identify significantly perturbed pathways in the gene expression profiles of breast cancer, leukemia and prostate cancer described above.
Drug response genes (DRG)
The top (up-regulated) and bottom (down-regulated) k most differentially expressed genes in response to bioactive compounds in disease cell lines were selected as DRG. We restricted the number of up (down)-regulated DRG to be within +/− 500 genes of the matched down (up)-regulated MAG. For example, if 500 up-regulated MAG are in an FLN cluster, k would range from a low of 100 to a high of 1000 in increments of 100.
Library of Integrated Cellular Signatures (LINCS)
LINCS profiles are generated using 3,678 and 4,228 bioactive compounds for breast cancer and prostate cancer, respectively, each compound typically applied at 6 different concentrations (0.0003-177 μM) and 2 time points (6 and 24 h). We retained the expression profile of a compound that produced maximal mutual predictability score before ranking the compounds. Twenty of the 3678 (11 of 4228) were FDA approved drugs for breast (prostate) cancer.
Connectivity map
We used CMap datasets for comparing the performance between our method with others. CMap profiles are generated using 1251, 1079 and 1182 bioactive compounds for breast cancer, leukemia and prostate cancer, respectively. Eight of the 1251, 6 of 1079, and 7 of 1182 were FDA approved drugs for breast cancer, leukemia and prostate cancer respectively.
Drug and clinical trial information retrieval
We collected data from DrugBank (http://www.drugbank.ca/). FDA approved drugs from FDA service: Drugs@FDA. Clinical trial data were downloaded from https://clinicaltrials.gov.
Mutual predictability (MP)
We used mutual predictability [24] to score the correlation between mutation associated genes (MAG) and drug response genes (DRG). In essence, mutual predictability is a measure of the degree to which MAG can be used as seed genes to predict DRG (predictability M-D), and vice versa (predictability D-M). The mutual predictability of the two sets measures the extent to which genes in one set can be used to identify (predict) genes in the other [24]. A disease drug pair with high mutual predictability has a strong functional relation; the higher the score, the stronger the relation.
To quantify the predictability M-D, we use MAG as seeds, and score and rank each gene connected to a seed using the disease mutual predictability score S i :
$$ {S}_i={\displaystyle \sum_{j\in seeds}}{w}_{ij} $$
where w ij weights the link between gene i and seed j, and the score is 0 if there is no seed connection.
We obtained the sensitivity and specificity variation by using a series of cutoffs on the ranked list. The number of true positives is taken to be the number of DRG above a particular cutoff; the number of true negatives is the number of non-DRG below the cutoff; the number of false positives is the number of non-DRG above the cutoff, and the false negatives are the number of DRG below the cutoff. AUC scores range from 0 to 1, with 0.5 and 1.0 indicating random and perfect predictive performance, respectively.
AUCD-M as a measure of predictability D-M is similarly calculated. The mutual predictability between MAG and DRG is then defined as the geometric mean of AUCD-M and AUCM-D:
$$ \mathrm{Mutual}\ \mathrm{Predictability}\ \left(\mathrm{M}\mathrm{A}\mathrm{G}\ \mathrm{and}\ \mathrm{D}\mathrm{R}\mathrm{G}\right) = \sqrt{{\mathrm{AUC}}_{\mathrm{D}-\mathrm{M}} \times {\mathrm{AUC}}_{\mathrm{M}-\mathrm{D}}} $$
Each bioactive compound is thereby ranked by its mutual predictability score.
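The scoring scheme above can be illustrated on a toy network. This is only a sketch: the gene names, edge weights, and seed sets below are hypothetical stand-ins, not values from the FLN.

```python
from math import sqrt

# Toy FLN as an undirected edge-weight table (hypothetical genes and weights)
fln = {
    ("g1", "g2"): 0.9, ("g1", "g3"): 0.4, ("g2", "g4"): 0.7,
    ("g3", "g5"): 0.6, ("g4", "g5"): 0.3, ("g2", "g5"): 0.5,
}

def weight(a, b):
    return fln.get((a, b)) or fln.get((b, a)) or 0.0

def seed_scores(seeds, universe):
    # S_i = sum over seeds j of w_ij (0 when gene i touches no seed)
    return {g: sum(weight(g, j) for j in seeds) for g in universe if g not in seeds}

def auc(scores, positives):
    """Rank-based AUC: probability that a positive gene outscores a negative one."""
    pos = [s for g, s in scores.items() if g in positives]
    neg = [s for g, s in scores.items() if g not in positives]
    if not pos or not neg:
        return 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

universe = {"g1", "g2", "g3", "g4", "g5"}
mag, drg = {"g1"}, {"g2"}                       # toy seed sets
auc_md = auc(seed_scores(mag, universe), drg)   # MAG seeds predicting DRG
auc_dm = auc(seed_scores(drg, universe), mag)   # DRG seeds predicting MAG
mp = sqrt(auc_md * auc_dm)                      # geometric mean = MP score
print(auc_md, auc_dm, mp)
```

In this toy case the single MAG and DRG genes are each other's strongest neighbors, so both directional AUCs, and hence the MP score, are 1.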
A detailed example of MP score computation is shown in Additional file 2, 2-1 and Additional file 3 Figure S1.
Evaluation of predictability
Statistical validation
We determined the extent to which FDA approved cancer drugs were enriched in our ranked list by again calculating an AUC as indicated above. Briefly, focus on a position t from the top. The ratio of FDA approved drugs for the target disease at or above position t, to total drugs at or above t, is taken as the true-positive rate; the ratio of non-FDA approved drugs below t to total drugs below t is the true-negative rate. The running index t is varied to produce a ROC, and the area under the curve (AUC) is used as a measure of predictability. This is of course a non-normalized result, but as we now indicate it is used only in a relative way, to compare different parameter sets.
Parameter optimization
Each set of parameters (rank cutoffs m & k for filtering MAG and selecting DRG) generated different ranked lists of bioactive compounds. We computed the AUC score using the ranked list, and chose the best set of parameters based on the maximum AUC score. Repositioned drug candidates were selected from the ranked list generated by the best parameter set. After optimization, the best parameters (number of MAG and DRG (MAG/DRG)) are 237/700 (UCDB) and 75/100 (DCUB) for breast cancer; and 333/100 (UCDB) and 46/100 (DCUB) for prostate cancer.
For the ranked list, the significance of the mutual predictability scores for each compound was estimated by randomly selecting a set of n DRG, computing the mutual predictability score given the MAG, repeating the process 100,000 times to generate a null distribution, and then estimating the probability that our observation was obtained by chance. We computed the false discovery rate (FDR) for individual compounds by calculating the expected number of false positives, given the actual distribution of mutual predictability scores and the null distribution.
We assessed the significance of the best AUC score by randomly selecting from LINCS, 20 out of 3678 drugs for breast cancer and 11 out of 4228 for prostate cancer as true positives. For CMap, we randomly selected 8 out of 1251 drugs for breast cancer; 6 out of 1079 for leukemia; and 7 out of 1182 for prostate cancer. We then computed the AUC for each parameter set, repeated the process 100,000 times and generated a null distribution. The p-value was used to estimate FDR for multiple tests.
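The permutation logic behind both significance tests can be sketched as follows. The gene universe, seed set, and overlap-based score below are hypothetical stand-ins (recomputing the full MP score 100,000 times would not fit a toy example), and 1,000 permutations are used instead of 100,000.

```python
import random

random.seed(0)  # deterministic for the sketch

universe = [f"g{i}" for i in range(100)]   # hypothetical gene universe
mag = set(universe[:10])                   # hypothetical seed (MAG) set

def score(drg):
    """Stand-in for the MP score: fraction of the DRG set hitting the MAG set."""
    return len(set(drg) & mag) / len(drg)

def empirical_pvalue(observed, n_drg, n_perm=1000):
    """P(score >= observed) under random DRG sets of the same size;
    add-one smoothing avoids reporting p = 0."""
    null = [score(random.sample(universe, n_drg)) for _ in range(n_perm)]
    return (sum(s >= observed for s in null) + 1) / (n_perm + 1)

print(empirical_pvalue(score(universe[:5]), 5))   # strong overlap -> small p
print(empirical_pvalue(0.0, 5))                   # no signal -> p = 1.0
```

The same machinery yields an FDR by comparing the observed score distribution against the pooled null distribution.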
Comparison with other methods
We applied the methods (Lamb et al. and Shigemizu et al.) that used CMap data to breast cancer, leukemia and prostate cancer and compared them with MFM.
Lamb et al. [20]
We queried the 50 to 500 (in increments of 50) up- and down-regulated signature genes of breast cancer (MCF7), leukemia (HL60) and prostate cancer (PC3) on (https://www.broadinstitute.org/cmap/newQuery?servletAction=querySetup&queryType=quick), and obtained ranked lists of bioactive compounds. The disease signature genes (FDR < 0.05) were generated from the same expression data used for MFM, as described in Transcript levels. The total number of compounds and the corresponding cell lines were the same as those used for MFM. We then followed the same procedure as for MFM to assess performance. The highest AUC score was selected for comparison.
Shigemizu et al. [22]
We used the same expression profiles (GDS2617, GDS2908 and GDS1439) and parameters (1200 and 1400 for UCDB and DCUB for breast cancer; 700 and 800 for UCDB and DCUB for leukemia; 5200 and 4200 for UCDB and DCUB for prostate cancer) reported in [22] to generate ranked lists of compounds. Performance was assessed with the same procedure used for MFM.
Cell cultures and reagents
Cell lines MCF7, SUM149 and MCF10A were obtained from ATCC (American Type Culture Collection, Manassas, VA) and maintained as recommended. The growth medium was supplemented with 10 % fetal bovine serum (FBS) and 50 units/ml of penicillin and streptomycin, and cells were incubated at 37 °C with 5 % carbon dioxide. Dimethyl sulfoxide (DMSO), at 0.2 %, was used as the vehicle control.
MTT assay
Metabolic activity of MCF7, MCF10A and SUM149 cells treated with vehicle (0.1 % DMSO) or repositioned drug candidates was assessed with the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide) assay. Cells were placed in 96-well plates and treated for 24 h with drugs at concentrations ranging from 0–1000 μM, then assayed for metabolic activity. 10 μl of MTT solution (10 mg/ml in PBS) was added to each well and incubated for an additional 3 h. The medium was then replaced with 200 μl of DMSO. Absorbance was determined at 570 nm (experimental absorbance) and 690 nm (background absorbance) by an ELISA plate reader. The inhibitory effect of drug candidates was expressed as relative metabolic activity (% control), calculated as: relative viability = (experimental absorbance − background absorbance) / (vehicle absorbance − vehicle background absorbance) × 100 %.
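The viability formula reduces to a one-line calculation. The absorbance readings below are made-up illustrative values, not measured data.

```python
def relative_viability(a570, a690, vehicle570, vehicle690):
    """Percent metabolic activity relative to the vehicle control,
    with the 690 nm background subtracted from each reading."""
    return (a570 - a690) / (vehicle570 - vehicle690) * 100.0

# hypothetical plate-reader values: drug-treated well vs. vehicle-control well
print(relative_viability(0.45, 0.05, 0.85, 0.05))   # 50.0 (% of control)
```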
We screened repositioned drug candidates by using mutual predictability [24] to score correlation between mutation-associated genes up-regulated in disease samples and genes down-regulated by bioactive compounds (DCUB), and vice versa (UCDB). Since a high mutual predictability score indicates strong functional linkage between sets of disease and drug related genes, our hypothesis is that candidate drugs so identified have potential to correct the sets of disease genes and have therapeutic effect on the disease.
Identification of repositioned drug candidates for breast cancer and prostate cancer using LINCS
We performed the analysis on the most up-to-date gene expression signatures of bioactive compounds from LINCS [25]. We evaluated the significance of the mutual predictability score of each compound, and the FDRs, as explained under Methods.
Statistics of significant bioactive compounds
LINCS includes breast cancer cell line expression in response to 3678 compounds. We calculated the mutual predictability score for each of these, as described in Methods – Mutual Predictability Score. The gene sets associated with each cancer/compound pair were assigned p-values as described in Methods – Parameter optimization, to obtain ranked lists of 2435 DCUB compounds and 1875 UCDB compounds with FDR < 0.05 (Table 1). Of these, 510 were FDA-approved drugs that are candidates for repositioning to breast cancer. A detailed description of the candidates is given in Additional file 4.
Table 1 Breast cancer and prostate cancer repositioned drug candidates identified from analysis of LINCS. Complete lists of repositioned drug candidates for breast cancer and prostate cancer are shown in Additional file 13
Table 2 Mutual predictability score of breast cancer drug candidates predicted by MFM
LINCS includes prostate cancer cell line expression in response to 4228 compounds. The gene sets associated with each cancer/compound pair were assigned p-values to obtain ranked lists of 2500 DCUB compounds and 1668 UCDB compounds with FDR < 0.05 (Table 1). Of these, 291 were FDA-approved drugs that are candidates for repositioning to prostate cancer (Additional file 4).
To evaluate the predictability of the ranked drug candidates, ROC curves were generated using 20 FDA breast cancer drugs and 11 FDA prostate cancer drugs as true positives. The highest AUC scores were 0.86 (p = 1.0E-6) and 0.83 (p = 4.5E-5) for breast cancer and prostate cancer, respectively. We estimated the significance of the AUC scores as described in the Parameter optimization section.
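This evaluation — scoring how highly a ranked list places known drugs, then estimating significance by permutation — can be sketched as follows (drug names are placeholders; the permutation scheme is an assumption for illustration):

```python
import random

def auc_of_ranking(ranked, positives):
    """AUC of a ranked list (best candidates first) given known true positives."""
    pos_ranks = [i for i, drug in enumerate(ranked) if drug in positives]
    n_pos = len(pos_ranks)
    n_neg = len(ranked) - n_pos
    # Count correctly ordered (positive, negative) pairs:
    wins = sum(len(ranked) - 1 - r for r in pos_ranks) - n_pos * (n_pos - 1) / 2
    return wins / (n_pos * n_neg)

def auc_pvalue(ranked, positives, n_perm=10000, seed=0):
    """Empirical p-value: fraction of random orderings with AUC >= observed."""
    rng = random.Random(seed)
    observed = auc_of_ranking(ranked, positives)
    shuffled = list(ranked)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if auc_of_ranking(shuffled, positives) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

print(auc_of_ranking(["d1", "d2", "d3", "d4"], {"d1", "d2"}))  # 1.0
```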
Comparisons with computational drug repositioning methods
We compared the predictability of our method with that of computational drug repositioning methods that screen drugs based on the anti-correlation between drug and disease gene signatures, omitting the functional correlation between genes. In order to compare performance with Shigemizu et al. [22] and CMap [20], we obtained the expression data of 1251, 1079 and 1182 compounds tested in MCF7, HL60 and PC3 cells from the CMap data sets. We used the methods to generate ranked drug lists and compared the highest AUC scores. As shown in Fig. 2, MFM consistently outperforms the two previous methods, sometimes by wide margins.
Comparison of performance of MFM with other methods. We applied CMap datasets to compare the performance of MFM with Shigemizu et al. and Lamb et al. The sensitivity and specificity were calculated as explained in the Methods section, and the area under the ROC curve was used as a measure of performance. UCDB: prediction of drug candidates that can down-regulate genes up-regulated in cancer. DCUB: prediction of drug candidates that can up-regulate genes down-regulated in cancer. MFM consistently outperforms the two methods across datasets and diseases
Recall rate
Among 2587 bioactive compounds with FDR less than 0.05, 20/20 (p = 2.5E-4) FDA breast cancer drugs and 150/173 (p = 3.1E-10) clinical drugs (compounds that have been in clinical trials for breast cancer, Additional file 5) were recalled. For prostate cancer, among 1668 bioactive compounds with FDR less than 0.05, 10/11 (p = 2.6E-2) FDA prostate cancer drugs and 89/113 (p = 6.3E-6) clinical drugs were recalled. Significance was calculated using the Fisher exact test.
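The recall significance can be reproduced with a one-sided Fisher exact test, which for a fully contained overlap reduces to a hypergeometric upper tail. A stdlib-only sketch using the breast cancer counts above; the paper's exact background set may differ, so the resulting p-value is illustrative rather than a reproduction of the reported one:

```python
from math import comb

def fisher_greater(k, K, n, N):
    """P(overlap >= k) when n of N compounds are selected and K are true drugs
    (one-sided Fisher exact test / hypergeometric upper tail)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 20/20 FDA breast cancer drugs recalled among 2587 significant compounds,
# out of 3678 screened (background set assumed, see lead-in):
p = fisher_greater(20, 20, 2587, 3678)
print(f"p = {p:.1e}")  # ≈ 8.6e-04 with these assumed counts
```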
Functional plausibility
One way to characterize the functional implications of breast cancer MAGs is by estimating the chance probability of their observed distribution over KEGG pathways. We took the MAGs (MAG-UP; see Additional file 6) that produced the drug ranked lists with the highest AUC scores after optimization. The MAGs comprise 40 breast cancer mutations and their 237 filtered first nearest neighbors on the FLN, which are up-regulated in breast cancer.
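Estimating this chance probability amounts to a per-pathway overlap test followed by multiple-testing correction to control the FDR. A minimal Benjamini–Hochberg sketch (the raw p-values below are illustrative placeholders, not the computed pathway p-values):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values for FDR control."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity:
    for offset, i in enumerate(reversed(order)):
        rank = m - offset                             # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.001, 0.01, 0.03, 0.8]))  # first three pass FDR < 0.05
```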
As shown in Additional file 6, we found 95 pathways over-represented in breast cancer (FDR < 0.05), 18 of which are classified in KEGG as cancer pathways (22 of the 287 KEGG pathways are labeled cancer-related). For example, [28] found that the spliceosome assembly pathway is enriched in genes that are overexpressed in breast cancer samples compared to benign lesions. siRNA-mediated depletion of SmE (SNRPE) or SmD1 (SNRPD1) led to a marked reduction of cell viability in breast cancer cell lines, whereas it had little effect on the survival of the nonmalignant MCF10A breast epithelial cells [29].
In addition, signaling pathways that regulate pluripotent stem cells are enriched in overexpressed genes that are in the functional neighborhood of genes mutated in breast cancer tissue (MAGs, p = 4E-09). The deregulation of these pathways may play a role in the development of chemoresistance of cancer stem cells, including breast cancer [30]. Other published breast cancer causal pathways such as estrogen signaling [31], ErbB [32], neurotrophin [33], MAPK [34] and PI3K/AKT [35] were significantly enriched in mutation associated genes (MAGs).
A similar approach was followed for prostate cancer. As summarized in Additional file 6, we found 117 enriched pathways (FDR < 0.05), 18 of which are KEGG cancer pathways, including the prostate cancer pathway (p = 6.9E-10). There was also supporting evidence of deregulation of the enriched pathways in prostate cancer. For example, T cell infiltration of the prostate induced by androgen withdrawal has been found in patients with prostate cancer [36]; the androgen-androgen receptor (AR) system plays vital roles in prostate cancer development and progression [37]. Insulin-like growth factor 1 or insulin signaling has been found to activate androgen signaling through direct interactions of Foxo1 with the androgen receptor. Intervention in IGF1/insulin-phosphatidylinositol 3-kinase-Akt signaling has been reported to be of clinical value for prostate cancer. The T cell receptor, PI3K-Akt, FoxO, and insulin signaling pathways were highly ranked candidates with p < 1E-05.
A number of studies have shown that breast and prostate cancer are genetically related [38, 39], as are almost all cancers to various degrees. Our finding that breast and prostate cancer share 80 pathways is a striking illustration of this connection (see Additional file 6). We expect that the selected drug candidates having a strong functional relation (mutual predictability score) with this set of genes could potentially correct these aberrant functions.
MFM provides functional insight
We compared the functional information gained from MAGs with information obtained using disease differentially expressed genes (DEGs) (often referred to as disease signature genes) exclusively [19, 20]. As shown in Additional file 6, our current method identifies more significantly enriched pathways and more well-documented breast cancer and prostate cancer pathways than does the use of differential expression alone. To make the comparison, we mapped DEGs onto KEGG pathways: for breast cancer, the set contains the 247 most up-regulated DEGs; for prostate cancer, there were 333 up-regulated DEGs. The disease DEGs were generated from the expression data as explained in Transcript Level. Taken collectively, these results suggest that the inclusion of mutational and functional information in disease gene signatures substantially improves prediction of disease mechanism and adds specificity and accuracy to the identification of repositioned candidates.
Repositioned drug candidates inhibit metabolism of breast cancer cells
We employed an MTT assay to assess cancer cell viability after treatment with 5 repositioned drug candidates (Table 2) [40]. In particular, we tested the viability of 2 breast cancer cell lines: MCF7 (Luminal A subtype) and SUM149 (triple-negative, inflammatory breast cancer subtype). We assessed non-specific drug toxicity by comparing the inhibition with that obtained against the immortalized but non-malignant MCF10A cell line.
As shown in Additional file 7: Figure S2, Additional file 8: Figure S3, Additional file 9: Figure S4, Additional file 10: Figure S5, Additional file 11: Figure S6 and Additional file 12: Figure S7, MCF7, SUM149 and MCF10A cells exposed to increasing concentrations of drugs for 24 h exhibited a dose-dependent reduction in viability. The important measure of efficacy is the therapeutic index (TI): the IC50 of a drug against a non-tumor cell line relative to its IC50 against a tumor cell line. As shown in Fig. 3, the TIs of candidates tested against MCF7 and SUM149 are all substantially higher than that of Doxorubicin. In addition, all drug candidates except Triprolidine achieved maximum efficacy (Emax) at lower concentrations than did Doxorubicin.
a FDA approved indications of predicted drug candidates; b Half maximal inhibitory concentration (IC50) (μM) of predicted drug candidates and Doxorubicin against MCF7, SUM149 and MCF10A; c and d Therapeutic index (TI) and maximal inhibitory concentrations (Emax) of predicted repositioned drug candidates on MCF7, SUM149 and MCF10A. (*Currently used FDA drug for breast cancer; Therapeutic index (TI) was calculated as a ratio of the IC50 of MCF10A, to the IC50 of MCF7 and SUM149)
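The two summary quantities in Fig. 3 can be computed from the titration curves as sketched below (IC50 here is estimated by simple linear interpolation at 50 % viability; the dose–response values are illustrative placeholders, not the measured data):

```python
def ic50_from_curve(doses, viabilities):
    """IC50 by linear interpolation between the two doses (increasing order)
    that bracket 50 % relative viability."""
    points = list(zip(doses, viabilities))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= 50.0 >= v1:
            return d0 + (v0 - 50.0) / (v0 - v1) * (d1 - d0)
    raise ValueError("curve does not cross 50 % viability")

def therapeutic_index(ic50_normal, ic50_tumor):
    """TI = IC50 against the non-malignant line / IC50 against the tumor line;
    values above 1 indicate preferential inhibition of tumor cells."""
    return ic50_normal / ic50_tumor

# Illustrative titration (dose in uM, viability in % of control):
ic50 = ic50_from_curve([3.125, 6.25, 12.5, 25.0, 50.0],
                       [95.0, 80.0, 60.0, 40.0, 10.0])
print(round(ic50, 2))                 # 18.75
print(therapeutic_index(75.0, ic50))  # 4.0
```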
We developed a computational drug screening method -- based on the correlation between functional modules of genes perturbed by diseases and drugs -- that could potentially accelerate the introduction of new therapeutics for serious diseases and conditions. Our approach performed substantially better than previous methods by computational measures, and successfully predicted novel drugs with a higher inhibitory effect against breast cancer in vitro than Doxorubicin. The study benefited substantially from LINCS, the most up-to-date drug response expression data set currently available.
A number of computational drug-repositioning methods that utilize CMap have been devised, and the efficacy of identified drugs has been supported by in vivo experiments [16, 19]. However, these methodologies are based exclusively on gene expression, without taking disease driver/mutated genes or functional relations between genes into account. Sirota et al. [19] searched for drug candidates based on similarities between drug response gene signatures (DEGs), and the authors of [12] predicted drug molecular functions based on drug response gene signatures.
Here we have presented a method that takes this information into account and shows better performance than previous methods that relied solely on DEGs. We also showed that more functional information is gained from MAGs than from significantly differentially expressed genes (DEGs). We therefore believe that the method can screen more effective therapeutics than previous methods.
All five drugs for which we performed preliminary in vitro tests have a higher TI in both cell types than does Doxorubicin. Mefloquine is a lipophilic molecule that is an FDA-approved anti-malaria agent. It has 3 known protein targets: Fe(II)-protoporphyrin IX, hemoglobin subunit alpha, and the A2A adenosine receptor (A2AR). Its antimalarial action is believed to result from inhibition of heme polymerization within the food vacuole during the blood stages of the malaria life cycle [41]. Its potential role as a cancer therapeutic, however, stems from its antagonistic action on A2AR [42].
A study has shown that antagonizing A2AR could provide a basis for cancer immunotherapy [43]. Preclinical studies have confirmed that blockade of A2AR activation can markedly enhance anti-tumor immunity and is effective against melanoma and lymphoma [44–46].
Tumors may evade immune response by usurping pathways, such as the adenosinergic signaling pathway, that negatively regulate the immune response. Tumors and their microenvironment have been found to have high levels of adenosine and ATP, triggered by increased cellular turnover and hypoxia [43]. The extracellular adenosine then activates specific purinergic receptors such as A2AR. The activation of A2AR in cancer inhibits the immune response to tumors via suppression of T regulatory cell function and inhibition of natural killer cell cytotoxicity and tumor-specific CD4+ and CD8+ T cell activity; therefore, inhibition of A2AR by specific antagonists may enhance anti-tumor immunity.
Immunosuppression is associated with hypoxia and accelerated cell turnover. Consistent with these findings, our pathway-enrichment analysis of MAGs for breast cancer showed that the cell cycle, HIF1 and T cell signaling pathways are significantly dysregulated in breast cancer. Therefore, Mefloquine, an A2AR antagonist, could be applied as an effective immunotherapeutic strategy.
Fluphenazine and Thioridazine are both antipsychotics. The mechanism of action of fluphenazine is not well established, but it is known to antagonize dopamine by binding to the D2 receptor. Thioridazine binds a range of receptor types including dopamine and various serotonin receptor subtypes. The relationship to inhibition of transformed (MCF7 and SUM149) cells is not entirely obvious.
In our in vitro study, breast cancer cells (MCF7, SUM149 and MCF10A) showed resistance against Doxorubicin. The Emax of Doxorubicin was higher than that of 4 out of 5 of our candidate drugs, which corresponds with reports that breast cancer patients develop drug resistance against Doxorubicin. It also suggests the ability of our drug candidates to overcome this drug resistance. One study [47] found that Thioridazine antagonized dopamine receptors, which are expressed on cancer stem cells (CSC) and breast cancer cells, and could preferentially induce death of leukemia cancer stem cells without harming normal blood stem cells. The dopamine receptor pathway is known to regulate the growth of CSCs [48]. Therefore, Fluphenazine and Thioridazine could inhibit drug resistance of breast cancers by modulating CSCs through the dopamine receptor signaling pathway.
MFM, which utilizes a functional-linkage network, known mutations, and altered RNA levels, appears to be a promising method for identifying multi-targeted drug candidates that can correct aberrant cellular functions. In particular, its computational performance exceeded that of other CMap-based methods, and in vitro experiments indicate that 5/5 candidates have therapeutic indices superior to that of Doxorubicin in the MCF7 and SUM149 cancer cell lines. This new approach has the potential to provide a more efficient drug discovery pipeline.
A2AR, adenosine A2a receptor; AUC, area under the curve; CMap, connectivity map; CSC, cancer stem cells; DCUB, down regulated cancer genes up regulated bioactive compounds; DEG, differentially expressed genes; DMSO, Dimethylsulfoxide; DNA, deoxyribonucleic acid; DRG, drug response gene; EMax, maximal inhibitory concentration; FDA, Food and drug administration; FDR, false discovery rate; FLN, functional linkage network; GEO, gene expression omnibus; IC50, half maximal inhibitory concentration; KEGG, Kyoto encyclopedia of genes and genomes; LINCS, library of integrated network based cellular signatures; MAG, mutation associated gene; MFM, method of functional modules; MP, mutual predictability; MTT, 3-(4,5-Dimethylthiazol-2-Yl)-2,5-Diphenyltetrazolium Bromide; OMIM, online mendelian inheritance in man; RNA, ribonucleic acid; ROC, receiver operating characteristic; TCGA, the cancer genome atlas; TI, therapeutic index; UCDB, up regulated cancer genes down regulated bioactive compounds
Chong CR, Sullivan Jr DJ. New uses for old drugs. Nature. 2007;448(7154):645–6.
Kamb A, Wee S, Lengauer C. Why is cancer drug discovery so difficult? Nat Rev Drug Discov. 2007;6(2):115–20.
Kola I, Landis J. Can the pharmaceutical industry reduce attrition rates? Nat Rev Drug Discov. 2004;3(8):711–5.
DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J Health Econ. 2003;22(2):151–85.
Renaud RC, Xuereb H. Erectile-dysfunction therapies. Nat Rev Drug Discov. 2002;1(9):663–4.
Lin TS, Prusoff WH. Synthesis and biological activity of several amino analogues of thymidine. J Med Chem. 1978;21(1):109–12.
Shaughnessy AF. Old drugs, new tricks. BMJ. 2011;342:d741.
Khan SA, et al. Identification of structural features in chemicals associated with cancer drug response: a systematic data-driven analysis. Bioinformatics. 2014;30(17):i497–504.
Chu LH, Annex BH, Popel AS. Computational drug repositioning for peripheral arterial disease: prediction of anti-inflammatory and pro-angiogenic therapeutics. Front Pharmacol. 2015;6:179.
Li P, et al. Large-scale exploration and analysis of drug combinations. Bioinformatics. 2015;31(12):2007–16.
Zheng C, et al. Large-scale Direct Targeting for Drug Repositioning and Discovery. Sci Rep. 2015;5:11970.
Gottlieb A, et al. PREDICT: a method for inferring novel drug indications with application to personalized medicine. Mol Syst Biol. 2011;7:496.
Keiser MJ, et al. Predicting new molecular targets for known drugs. Nature. 2009;462(7270):175–81.
Ha S, et al. IDMap: facilitating the detection of potential leads with therapeutic targets. Bioinformatics. 2008;24(11):1413–5.
Campillos M, et al. Drug target identification using side-effect similarity. Science. 2008;321(5886):263–6.
Iorio F, et al. Discovery of drug mode of action and drug repositioning from transcriptional responses. Proc Natl Acad Sci U S A. 2010;107(33):14621–6.
Rogers FB. Medical subject headings. Bull Med Libr Assoc. 1963;51:114–6.
Hamosh A, et al. Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic Acids Res. 2005;33(Database issue):D514–7.
Sirota M, et al. Discovery and preclinical validation of drug indications using compendia of public gene expression data. Sci Transl Med. 2011;3(96):96ra77.
Lamb J, et al. The Connectivity Map: using gene-expression signatures to connect small molecules, genes, and disease. Science. 2006;313(5795):1929–35.
Dudley JT, et al. Computational repositioning of the anticonvulsant topiramate for inflammatory bowel disease. Sci Transl Med. 2011;3(96):96ra76.
Shigemizu D, et al. Using functional signatures to identify repositioned drugs for breast, myelogenous leukemia and prostate cancer. PLoS Comput Biol. 2012;8(2):e1002347.
Chung FH, et al. Functional Module Connectivity Map (FMCM): a framework for searching repurposed drug compounds for systems treatment of cancer and an application to colorectal adenocarcinoma. PLoS One. 2014;9(1):e86299.
Linghu B, et al. Genome-wide prioritization of disease genes and identification of disease-disease associations from an integrated human functional linkage network. Genome Biol. 2009;10(9):R91.
Vidovic D, Koleti A, Schurer SC. Large-scale integration of small molecule-induced genome-wide transcriptional responses, Kinome-wide binding affinities and cell-growth inhibition profiles reveal global trends characterizing systems-level drug action. Front Genet. 2014;5:342.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26(1):139–40.
Hung JH, et al. Identification of functional modules that correlate with phenotypic difference: the influence of network topology. Genome Biol. 2010;11(2):R23.
Andre F, et al. Exonic expression profiling of breast cancer and benign lesions: a retrospective analysis. Lancet Oncol. 2009;10(4):381–90.
Quidville V, et al. Targeting the deregulated spliceosome core machinery in cancer cells triggers mTOR blockade and autophagy. Cancer Res. 2013;73(7):2247–58.
Czerwinska P, Kaminska B. Regulation of breast cancer stem cell features. Contemp Oncol (Pozn). 2015;19(1A):A7–A15.
Saha Roy S, Vadlamudi RK. Role of estrogen receptor signaling in breast cancer metastasis. Int J Breast Cancer. 2012;2012:654698.
Britten CD. Targeting ErbB receptor signaling: a pan-ErbB approach to cancer. Mol Cancer Ther. 2004;3(10):1335–42.
Hondermarck H. Neurotrophins and their receptors in breast cancer. Cytokine Growth Factor Rev. 2012;23(6):357–65.
Roberts PJ, Der CJ. Targeting the Raf-MEK-ERK mitogen-activated protein kinase cascade for the treatment of cancer. Oncogene. 2007;26(22):3291–310.
Paplomata E, O'Regan R. The PI3K/AKT/mTOR pathway in breast cancer: targets, trials and biomarkers. Ther Adv Med Oncol. 2014;6(4):154–66.
Mercader M, et al. T cell infiltration of the prostate induced by androgen withdrawal in patients with prostate cancer. Proc Natl Acad Sci U S A. 2001;98(25):14565–70.
Fan W, et al. Insulin-like growth factor 1/insulin signaling activates androgen signaling through direct interactions of Foxo1 with androgen receptor. J Biol Chem. 2007;282(10):7329–38.
Lopez-Otin C, Diamandis EP. Breast and prostate cancer: an analysis of common epidemiological, genetic, and biochemical features. Endocr Rev. 1998;19(4):365–96.
Risbridger GP, et al. Breast and prostate cancer: more similar than different. Nat Rev Cancer. 2010;10(3):205–12.
van Meerloo J, Kaspers GJ, Cloos J. Cell sensitivity assays: the MTT assay. Methods Mol Biol. 2011;731:237–45.
Foley M, Tilley L. Quinoline antimalarials: mechanisms of action and resistance. Int J Parasitol. 1997;27(2):231–40.
Weiss SM, et al. Discovery of nonxanthine adenosine A2A receptor antagonists for the treatment of Parkinson's disease. Neurology. 2003;61(11 Suppl 6):S101–6.
Leone RD, Lo YC, Powell JD. A2aR antagonists: Next generation checkpoint blockade for cancer immunotherapy. Comput Struct Biotechnol J. 2015;13:265–72.
Waickman AT, et al. Enhancement of tumor immunotherapy by deletion of the A2A adenosine receptor. Cancer Immunol Immunother. 2012;61(6):917–26.
Beavis PA, et al. Blockade of A2A receptors potently suppresses the metastasis of CD73+ tumors. Proc Natl Acad Sci U S A. 2013;110(36):14711–6.
Ohta A, et al. A2A adenosine receptor protects tumors from antitumor T cells. Proc Natl Acad Sci U S A. 2006;103(35):13132–7.
Sachlos E, et al. Identification of drugs including a dopamine receptor antagonist that selectively target cancer stem cells. Cell. 2012;149(6):1284–97.
Vinogradov S, Wei X. Cancer stem cells and drug resistance: the potential of nanomedicine. Nanomedicine (Lond). 2012;7(4):597–615.
CD acknowledges funding from the NIH R01 GM103502-05.
The sources and information on how to access the raw datasets analysed in the study are specified in the Methods section. The datasets supporting the conclusions of this article are included within the article and its additional files.
Conceived and designed the experiments: CD ZH HC DS. Analyzed the data: HC. Wrote the paper: HC CD. All authors read and approved the final version of the manuscript.
The research does not involve human data.
Bioinformatics Program, College of Engineering, Boston University, Boston, MA, USA
Hsiao-Rong Chen, Zhenjun Hu & Charles DeLisi
Graduate Program in Translational Molecular Medicine, Boston University School of Medicine, Boston, MA, USA
Department of Environmental Health, Boston University School of Public Health, Boston, MA, USA
David H. Sherr
Department of Biomedical Engineering, Boston University, Boston, MA, USA
Charles DeLisi
Correspondence to Charles DeLisi.
Additional file 1: A table listing well-documented mutated genes for breast cancer, prostate cancer and leukemia. (XLS 889 kb)
Additional file 2: Detailed process of the MP score computation. (DOCX 106 kb)
Additional file 3: An example of mutual predictability score computation. For ROC curve M-D (sensitivity plotted against 1 − specificity), sensitivity and 1 − specificity are defined as follows: sensitivity = TP / (TP + FN), 1 − specificity = FP / (TN + FP), where TP is the number of DRG genes above a particular Si cutoff, TN is the number of genes associated with neither disease below the cutoff, FP is the number of genes associated with neither disease above the cutoff, and FN is the number of DRG genes below the cutoff. ROC curve D-M was plotted in the same way. The MP score (0.73) is defined as the geometric mean of the areas under the ROC M-D and ROC D-M curves: AUC M-D (0.81) and AUC D-M (0.65). (PPTX 299 kb)
Additional file 4: Detailed description of identified drug candidates for breast and prostate cancer. (XLSX 140 kb)
Additional file 5: FDA-approved and clinical drugs for breast and prostate cancer. (XLSX 52 kb)
Additional file 6: A table listing MAGs for breast cancer and prostate cancer, together with the enriched KEGG pathways for each. (XLSX 46 kb)
Additional file 7: Figure S2. Titration curves of cell viability under treatment of Doxorubicin. Viability of MCF10A, MCF7 and SUM149 cells exposed to Doxorubicin with concentrations ranging from 0.5 μM to 200 μM after 24 h incubation. The relative viability was calculated as relative viability = (experimental absorbance − background absorbance) / (absorbance of untreated controls − background absorbance of untreated controls) × 100 % (means ± SD, n = 6). (PPTX 53 kb)
Additional file 8: Figure S3. Titration curves of cell viability under treatment of Mefloquine. Viability of MCF10A, MCF7 and SUM149 cells exposed to Mefloquine with concentrations ranging from 3.125 μM to 100 μM after 24 h incubation. The relative viability was calculated as relative viability = (experimental absorbance − background absorbance) / (absorbance of untreated controls − background absorbance of untreated controls) × 100 % (means ± SD, n = 3). (PPTX 53 kb)
Additional file 9: Figure S4. Titration curves of cell viability under treatment of Clotrimazole. Viability of MCF10A, MCF7 and SUM149 cells exposed to Clotrimazole with concentrations ranging from 3.125 μM to 100 μM after 24 h incubation. The relative viability was calculated as relative viability = (experimental absorbance − background absorbance) / (absorbance of untreated controls − background absorbance of untreated controls) × 100 % (means ± SD, n = 3). (PPTX 53 kb)
Additional file 10: Figure S5. Titration curves of cell viability under treatment of Thioridazine. Viability of MCF10A, MCF7 and SUM149 cells exposed to Thioridazine with concentrations ranging from 3.125 μM to 100 μM after 24 h incubation. The relative viability was calculated as relative viability = (experimental absorbance − background absorbance) / (absorbance of untreated controls − background absorbance of untreated controls) × 100 % (means ± SD, n = 3). (PPTX 54 kb)
Additional file 11: Figure S6. Titration curves of cell viability under treatment of Fluphenazine. Viability of MCF10A, MCF7 and SUM149 cells exposed to Fluphenazine with concentrations ranging from 3.125 μM to 100 μM after 24 h incubation. The relative viability was calculated as relative viability = (experimental absorbance − background absorbance) / (absorbance of untreated controls − background absorbance of untreated controls) × 100 % (means ± SD, n = 3). (PPTX 55 kb)
Additional file 12: Figure S7. Titration curves of cell viability under treatment of Triprolidine. Viability of MCF10A, MCF7 and SUM149 cells exposed to Triprolidine with concentrations ranging from 31.25 μM to 1000 μM after 24 h incubation. The relative viability was calculated as relative viability = (experimental absorbance − background absorbance) / (absorbance of untreated controls − background absorbance of untreated controls) × 100 % (means ± SD, n = 3). (PPTX 55 kb)
Additional file 13: A table listing predicted drug candidates for breast and prostate cancer using the LINCS dataset. (XLSX 1152 kb)
Chen, H., Sherr, D.H., Hu, Z. et al. A network based approach to drug repositioning identifies plausible candidates for breast cancer and prostate cancer. BMC Med Genomics 9, 51 (2016) doi:10.1186/s12920-016-0212-7
Computational drug repositioning
Functional and structural genomics | CommonCrawl |
July 2011, 10(4): 1307-1314. doi: 10.3934/cpaa.2011.10.1307
Alternative proof for the existence of Green's function
Sungwon Cho
Department of Mathematics Education, Gwangju National University of Education, 93 Pilmunlo Bugku, Gwangju 500-703, South Korea
Received November 2009; revised September 2010; published April 2011.
We present a new method for the existence of a Green's function of a non-divergence form parabolic operator with Hölder continuous coefficients. We also derive a Gaussian estimate. The main ideas involve only basic estimates and known results, without the potential approach used by E. E. Levi.
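For orientation, the two-sided Gaussian estimate in question has the standard Aronson-type form (stated here as the usual textbook formulation, not quoted from the paper; the constant $C>1$ depends only on the dimension, the ellipticity constants and the Hölder norms of the coefficients):

```latex
C^{-1}(t-s)^{-n/2}\exp\left(-\frac{C\,|x-y|^{2}}{t-s}\right)
\;\le\; G(x,t;y,s) \;\le\;
C\,(t-s)^{-n/2}\exp\left(-\frac{|x-y|^{2}}{C\,(t-s)}\right),
\qquad t>s .
```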
Keywords: second order parabolic equation, fundamental solution, Gaussian estimate, Green's function.
Mathematics Subject Classification: Primary: 35K10; Secondary: 31B1.
Citation: Sungwon Cho. Alternative proof for the existence of Green's function. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1307-1314. doi: 10.3934/cpaa.2011.10.1307
A. Ancona, Principe de Harnack à la frontière et théorème de Fatou pour un opérateur elliptique dans un domaine lipschitzien, Ann. Inst. Fourier (Grenoble), 28 (1978), 169.
D. Aronson, Non-negative solutions of linear parabolic equations, Ann. Scuola Norm. Sup. Pisa, 22 (1968), 607.
P. Auscher, Regularity theorems and heat kernel for elliptic operators, J. London Math. Soc., 54 (1996), 284. doi: 10.1112/jlms/54.2.284.
P. Bauman, Equivalence of the Green's functions for diffusion operators in $R^n$: a counterexample, Proc. Amer. Math. Soc., 91 (1984), 64. doi: 10.1090/S0002-9939-1984-0735565-4.
P. Bauman, Positive solutions of elliptic equations in nondivergence form and their adjoints, Ark. Mat., 22 (1984), 153. doi: 10.1007/BF02384378.
S. Cho, Two-sided global estimates of the Green's function of parabolic equations, Potential Analysis, 25 (2006), 387. doi: 10.1007/s11118-006-9026-0.
R. Courant and D. Hilbert, "Methods of Mathematical Physics," Vol. II, reprint of the 1962 original, 1962.
E. B. Davies, "Heat Kernels and Spectral Theory," Cambridge Univ. Press, 1989. doi: 10.1017/CBO9780511566158.
S. Èĭdel'man, "Parabolicheskie sistemy," Izdat., 1964.
L. Escauriaza, Bounds for the fundamental solution of elliptic and parabolic equations in nondivergence form, Comm. Partial Differential Equations, 25 (2000), 821. doi: 10.1080/03605300008821533.
E. Fabes, N. Garofalo and S. Salsa, A control on the set where a Green's function vanishes, Colloq. Math., 60/61 (1990), 637.
A. Friedman, "Partial Differential Equations of Parabolic Type," Prentice Hall, 1964.
A. Il'in, A. Kalašnikov and O. Oleĭnik, Second-order linear equations of parabolic type, Uspehi Mat. Nauk, 17 (1962), 3. doi: 10.1070/RM1962v017n03ABEH004115.
H. Kalf, On E. E. Levi's method of constructing a fundamental solution for second-order elliptic equations, Rend. Circ. Mat. Palermo, 41 (1992), 251. doi: 10.1007/BF02844669.
O. Ladyzhenskaya, V. Solonnikov and N. Ural'tseva, "Linear and Quasi-linear Equations of Parabolic Type," translated from the Russian by S. Smith, Translations of Mathematical Monographs, 1967.
E. Levi, Sulle equazioni lineari totalmente ellittiche alle derivate parziali, Rend. Circ. Mat. Palermo, 24 (1907), 275.
E. Levi, I problemi dei valori al contorno per le equazioni lineari totalmente ellittiche alle derivate parziali, Memorie Mat. Fis. Soc. Ital. Scienze (detta dei XL), 16 (1909), 3-113.
G. Lieberman, "Second Order Parabolic Differential Equations," World Scientific, 1996.
V. Liskevich and Y. Semenov, Estimates for fundamental solutions of second-order parabolic equations, J. London Math. Soc., 62 (2000), 521. doi: 10.1112/S0024610700001332.
E. Ouhabaz, "Analysis of Heat Equations on Domains," London Mathematical Society Monographs Series 31, 2005.
F. Porper and S. Èĭdel'man, Two-sided estimates of the fundamental solutions of second-order parabolic equations and some applications of them, Uspekhi Math. Nauk, 39 (1984), 107. doi: 10.1070/RM1984v039n03ABEH003164.
L. Saloff-Coste, "Aspects of Sobolev-type Inequalities," London Mathematical Society Lecture Note Series 289, 2002.
P. Sjögren, On the adjoint of an elliptic linear differential operator and its potential theory, Ark. Mat., 11 (1973), 153. doi: 10.1007/BF02388513.
W. Sternberg, Über die lineare elliptische Differentialgleichung zweiter Ordnung mit drei unabhängigen Veränderlichen, Math. Z., 21 (1924), 286. doi: 10.1007/BF01187471.
Q. Zhang, The boundary behavior of heat kernels of Dirichlet Laplacians, Journal of Differential Equations, 182 (2002), 416. doi: 10.1006/jdeq.2001.4112.
Hongjie Dong, Seick Kim. Green's functions for parabolic systems of second order in time-varying domains. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1407-1433. doi: 10.3934/cpaa.2014.13.1407
Wen-ming He, Jun-zhi Cui. The estimate of the multi-scale homogenization method for Green's function on Sobolev space $W^{1,q}(\Omega)$. Communications on Pure & Applied Analysis, 2012, 11 (2) : 501-516. doi: 10.3934/cpaa.2012.11.501
Mourad Choulli. Local boundedness property for parabolic BVP's and the Gaussian upper bound for their Green functions. Evolution Equations & Control Theory, 2015, 4 (1) : 61-67. doi: 10.3934/eect.2015.4.61
Peter Bella, Arianna Giunti. Green's function for elliptic systems: Moment bounds. Networks & Heterogeneous Media, 2018, 13 (1) : 155-176. doi: 10.3934/nhm.2018007
Virginia Agostiniani, Rolando Magnanini. Symmetries in an overdetermined problem for the Green's function. Discrete & Continuous Dynamical Systems - S, 2011, 4 (4) : 791-800. doi: 10.3934/dcdss.2011.4.791
Galina V. Grishina. On positive solution to a second order elliptic equation with a singular nonlinearity. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1335-1343. doi: 10.3934/cpaa.2010.9.1335
Boris P. Belinskiy, Peter Caithamer. Energy estimate for the wave equation driven by a fractional Gaussian noise. Conference Publications, 2007, 2007 (Special) : 92-101. doi: 10.3934/proc.2007.2007.92
Jeremiah Birrell. A posteriori error bounds for two point boundary value problems: A green's function approach. Journal of Computational Dynamics, 2015, 2 (2) : 143-164. doi: 10.3934/jcd.2015001
Benjamin Seibold, Morris R. Flynn, Aslan R. Kasimov, Rodolfo R. Rosales. Constructing set-valued fundamental diagrams from Jamiton solutions in second order traffic models. Networks & Heterogeneous Media, 2013, 8 (3) : 745-772. doi: 10.3934/nhm.2013.8.745
Kyoungsun Kim, Gen Nakamura, Mourad Sini. The Green function of the interior transmission problem and its applications. Inverse Problems & Imaging, 2012, 6 (3) : 487-521. doi: 10.3934/ipi.2012.6.487
Jongkeun Choi, Ki-Ahm Lee. The Green function for the Stokes system with measurable coefficients. Communications on Pure & Applied Analysis, 2017, 16 (6) : 1989-2022. doi: 10.3934/cpaa.2017098
Jiann-Sheng Jiang, Kung-Hwang Kuo, Chi-Kun Lin. Homogenization of second order equation with spatial dependent coefficient. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 303-313. doi: 10.3934/dcds.2005.12.303
Lucas Bonifacius, Ira Neitzel. Second order optimality conditions for optimal control of quasilinear parabolic equations. Mathematical Control & Related Fields, 2018, 8 (1) : 1-34. doi: 10.3934/mcrf.2018001
Walter Allegretto, Liqun Cao, Yanping Lin. Multiscale asymptotic expansion for second order parabolic equations with rapidly oscillating coefficients. Discrete & Continuous Dynamical Systems - A, 2008, 20 (3) : 543-576. doi: 10.3934/dcds.2008.20.543
Florian Schneider. Second-order mixed-moment model with differentiable ansatz function in slab geometry. Kinetic & Related Models, 2018, 11 (5) : 1255-1276. doi: 10.3934/krm.2018049
Anurag Jayswala, Tadeusz Antczakb, Shalini Jha. Second order modified objective function method for twice differentiable vector optimization problems over cone constraints. Numerical Algebra, Control & Optimization, 2019, 9 (2) : 133-145. doi: 10.3934/naco.2019010
P. Álvarez-Caudevilla, J. D. Evans, V. A. Galaktionov. The Cauchy problem for a tenth-order thin film equation II. Oscillatory source-type and fundamental similarity solutions. Discrete & Continuous Dynamical Systems - A, 2015, 35 (3) : 807-827. doi: 10.3934/dcds.2015.35.807
Atsushi Kawamoto. Hölder stability estimate in an inverse source problem for a first and half order time fractional diffusion equation. Inverse Problems & Imaging, 2018, 12 (2) : 315-330. doi: 10.3934/ipi.2018014
Changchun Liu. A fourth order nonlinear degenerate parabolic equation. Communications on Pure & Applied Analysis, 2008, 7 (3) : 617-630. doi: 10.3934/cpaa.2008.7.617
Zhi-Min Chen. Straightforward approximation of the translating and pulsating free surface Green function. Discrete & Continuous Dynamical Systems - B, 2014, 19 (9) : 2767-2783. doi: 10.3934/dcdsb.2014.19.2767
Sungwon Cho | CommonCrawl |
What Does Q Mean In Physics
What Is Electric Charge
In physics, charge, also known as electric charge, electrical charge, or electrostatic charge and symbolized q, is a characteristic of a unit of matter that expresses the extent to which it has more or fewer electrons than protons. In atoms, the electron carries a negative elementary or unit charge; the proton carries a positive charge. The two types of charge are equal and opposite.
In an atom of matter, an electrical charge occurs whenever the number of protons in the nucleus differs from the number of electrons surrounding that nucleus. If there are more electrons than protons, the atom has a negative charge. If there are fewer electrons than protons, the atom has a positive charge. The amount of charge carried by an atom is always a multiple of the elementary charge, that is, the charge carried by a single electron or a single proton. A particle, atom, or object with negative charge is said to have negative electric polarity; a particle, atom, or object with positive charge is said to have positive electric polarity.
An electric field, also called an electrical field or an electrostatic field, surrounds any object that has charge. The electric field strength at any given distance from an object is directly proportional to the amount of charge on the object. Near any object having a fixed electric charge, the electric field strength diminishes in proportion to the square of the distance from the object (the inverse-square law).
F = q₁q₂ / (4πε₀r²)
Example 2 Calculating The Force Exerted On A Point Charge By An Electric Field
What force does the electric field found in the previous example exert on a point charge of −0.250 μC?
Since we know the electric field strength and the charge in the field, the force on that charge can be calculated using the definition of electric field, E = F/q, rearranged to F = qE.
The magnitude of the force on a charge q = −0.250 μC exerted by a field of strength E = 7.20 × 10⁵ N/C is thus,
|F| = |q|E = (0.250 × 10⁻⁶ C)(7.20 × 10⁵ N/C) = 0.180 N
Because q is negative, the force is directed opposite to the direction of the field.
The force is attractive, as expected for unlike charges. The charges in this example are typical of common static electricity, and the modest attractive force obtained is similar to forces experienced in static cling and similar situations.
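The numbers in this example can be verified with a few lines of code (a minimal sketch; the values are those of the example, with the charge written as −0.250 μC):

```python
# Force on a point charge in a uniform field, F = qE (values from the example).
q = -0.250e-6   # charge in coulombs (-0.250 microcoulombs)
E = 7.20e5      # field strength in N/C

F = q * E            # signed force along the field direction, in newtons
magnitude = abs(F)   # expected: 0.180 N
print(f"F = {F:.3e} N, |F| = {magnitude:.3f} N")
```

The negative sign of F reproduces the statement above that the force is directed opposite to the field.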
Lack Of Fractional Charges
Paul Dirac argued in 1931 that if magnetic monopoles exist, then electric charge must be quantized however, it is unknown whether magnetic monopoles actually exist. It is currently unknown why isolatable particles are restricted to integer charges much of the string theory landscape appears to admit fractional charges.
What Is The Meaning Of Magnitude In Physics
Any quantity's magnitude is a number that indicates how large or small a measurement of a physical quantity is in comparison to a given reference value.
In physics, magnitude is described in simple words as distance or quantity.
In the context of motion, it gives the size of an object's movement, whether expressed in absolute or relative terms, without regard to direction.
It is used to describe the size or extent of something.
Generally, in physics, magnitude relates to distance or quantity. Magnitude defines the size of an entity, or its speed when it is moving.
Vector quantities are characterized by both magnitude and direction. Displacement, velocity, acceleration, and force are examples of vector quantities. The absolute value of a vector is referred to as its magnitude.
Two vectors are equal only if their magnitude and direction are the same. The magnitude of a vector changes when it is multiplied by a positive number, but the direction remains the same. A vector's magnitude and direction will both change if it is multiplied by a negative value.
Therefore, in physics, the magnitude of any quantity tells us how big a physical quantity or measurement is in comparison to some reference value. A physical quantity is represented mathematically as a combination of a numerical value and a unit; the magnitude is the numerical value associated with a measurement of that quantity.
Is Electric Charge A Vector Quantity
Electric charge is a scalar quantity. Apart from having a magnitude and direction, a quantity can be termed a vector only if it also obeys the laws of vector addition, such as the triangle law and the parallelogram law of vector addition. In the case of electric current, when two currents meet at a junction, the resultant current is their algebraic sum, not their vector sum. Therefore, an electric current is a scalar quantity, although it possesses magnitude and direction.
What Are The 26 Science Terms
Possible answers include: A astronomy, B biology, C chemistry, D diffusion, E experiment, F fossil, G geology, H heat, I interference, J jet stream, K kinetic, L latitude, M motion, N neutron, O oxygen, P physics, Q quasar, R respiration, S solar system, T thermometer, U
Electric Charge And Coulomb's Law
there are two kinds of charge, positive and negative
like charges repel, unlike charges attract
positive charge comes from having more protons than electrons; negative charge comes from having more electrons than protons
charge is quantized, meaning that charge comes in integer multiples of the elementary charge e
charge is conserved
Probably everyone is familiar with the first three concepts, but what does it mean for charge to be quantized? Charge comes in multiples of an indivisible unit of charge, represented by the letter e. In other words, charge comes in multiples of the charge on the electron or the proton. These things have the same size charge, but the sign is different. A proton has a charge of +e, while an electron has a charge of -e.
Electrons and protons are not the only things that carry charge. Other particles also carry charge in multiples of the electronic charge. Those are not going to be discussed, for the most part, in this course, however.
Putting "charge is quantized" in terms of an equation, we say:
q = n e
q is the symbol used to represent charge, while n is a positive or negative integer, and e is the electronic charge, 1.60 × 10⁻¹⁹ coulombs.
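A short sketch of charge quantization, q = n·e, using the rounded value of e quoted above (the helper name `charge` is ours, not standard):

```python
# Charge quantization: any observable charge is an integer multiple of e.
e = 1.60e-19  # elementary charge in coulombs (rounded value from the text)

def charge(n):
    """Total charge of n elementary charges; n < 0 models surplus electrons."""
    return n * e

print(charge(3))   # three protons' worth of charge (~4.8e-19 C)
print(charge(-1))  # a single electron (~-1.6e-19 C)
```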
Example 1 Calculating The Electric Field Of A Point Charge
Calculate the strength and direction of the electric field E due to a point charge of 2.00 nC at a distance of 5.00 mm from the charge.
We can find the electric field created by a point charge by using the equation E = kQ/r².
Here Q = 2.00 × 10⁻⁹ C and r = 5.00 × 10⁻³ m. Entering those values into the above equation gives
E = kQ/r² = (8.99 × 10⁹ N·m²/C²)(2.00 × 10⁻⁹ C)/(5.00 × 10⁻³ m)² = 7.19 × 10⁵ N/C
This electric field strength is the same at any point 5.00 mm away from the charge Q that creates the field. It is positive, meaning that it has a direction pointing away from the charge Q.
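A quick numerical check of this example (a sketch; k = 8.99 × 10⁹ N·m²/C² is Coulomb's constant):

```python
# Electric field of a point charge, E = kQ/r^2 (values from the example).
k = 8.99e9    # Coulomb's constant in N*m^2/C^2
Q = 2.00e-9   # 2.00 nC in coulombs
r = 5.00e-3   # 5.00 mm in metres

E = k * Q / r**2   # expected: ~7.19e5 N/C
print(f"E = {E:.3e} N/C")
```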
What Does A Gradient Mean In Physics
I'm a physics high school student and have learned about the term 'gradient' regarding a few situations, such as pressure gradients and temperature gradients.
But what does this really mean? What is the physical meaning of a gradient? I know that the pressure gradient is dP/dx and the temperature gradient is dT/dx. If we take dx, for instance, as an extremely small number, then the gradient seems to approach a very large value. What does this imply? Please explain in simple language!
Comment: Do not forget that P is a function of x, so if P is continuous, then if dx is small, dP is small too; the ratio is independent of the value of dx if dx is small enough. (en.wikipedia.org/wiki/Derivative)
I struggled with the concept myself even in later calculus, which is a real problem when you're a meteorology major!
But one day it just dawned on me that it's as simple as it sounds. It's the rate of difference.
Gradient refers to how steep a line is, which is basically the slope. dP/dx and dT/dx are basically derivatives of functions, i.e., their slopes.
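The point made in the comment above can be seen numerically: for a smooth P(x), shrinking dx also shrinks dP, so the difference quotient settles to a finite slope instead of blowing up. A minimal sketch (the linear pressure profile below is invented for illustration):

```python
# Central-difference estimate of a gradient df/dx at a point x.
def gradient(f, x, dx=1e-6):
    """Approximate df/dx; for small dx this approaches the true derivative."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

P = lambda x: 100.0 - 2.0 * x   # invented pressure profile: dP/dx = -2 everywhere
print(gradient(P, 3.0))          # approximately -2.0, not a huge number
```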
Why Is Static Electricity More Apparent In Winter
You notice static electricity much more in winter than in summer because the air is much drier in winter than summer. Dry air is a relatively good electrical insulator, so if something is charged the charge tends to stay. In more humid conditions, such as you find on a typical summer day, water molecules, which are polarized, can quickly remove charge from a charged object.
What Is The Difference Between Electricity And Magnetism
Electricity can be present in a static charge, while magnetism's presence is only felt when there are moving charges as a result of electricity. In simple words, electricity can exist without magnetism, but magnetism cannot exist without electricity.
What Does The 3 Dot Triangle Tattoo Mean
When the three dots tattoo is arranged in a triangular shape, it is commonly associated with prison life and criminality. The triangular three dots tattoo generally stands for the concept of mi vida loca, Spanish for my crazy life and is typically associated with the gang community and lengthy prison sentences.
From The Josephson And Von Klitzing Constants
Another accurate method for measuring the elementary charge is by inferring it from measurements of two effects in quantum mechanics: the Josephson effect, voltage oscillations that arise in certain superconducting structures; and the quantum Hall effect, a quantum effect of electrons at low temperatures, strong magnetic fields, and confinement into two dimensions. The Josephson constant is K_J = 2e/h, where h is the Planck constant.
What Does The Triangle Delta Mean In Physics
What does the triangle delta mean in physics? In general physics, delta-v is a change in velocity. The Greek uppercase letter Δ (delta) is the standard mathematical symbol to represent a change in some quantity. Depending on the situation, delta-v can be either a spatial vector or a scalar.
What does the triangle delta mean? In trigonometry, lower-case delta (δ) can represent the area of a triangle. Uppercase delta (Δ) oftentimes means "change" or "the change in" in maths.
What does the Δ symbol mean? A change in value, often shown using the delta symbol. Example: Δx means the change in the value of x. When we do simple counting the increment is 1, like this: 1, 2, 3, 4, …
What do the symbols δ+ and δ− indicate? δ+: a symbol which indicates an atom or region with a deficiency of electron density, often because of resonance delocalization, electronegativity differences, or inductive effects.
What Is The Difference Between Q And q
Also to know is, what is the difference between Q and q in electricity?
Big Q represents the source charge which creates the electric field. Little q represents the test charge which is used to measure the strength of the electric field at a given location surrounding the source charge.
Also, why is charge denoted by Q? This predominance or deficiency of electrons, the principle we know as "charge," was also called "the quantity of electricity." "E" referred to electrons, so "Q," after the first word of that phrase, came to represent charge. Wikipedia notes that the term "quantity of electricity" was once common in scientific literature.
Beside this, what is the Q in Coulomb's law?
Charge comes in multiples of an indivisible unit of charge, represented by the letter e. q is the symbol used to represent charge, while n is a positive or negative integer, and e is the electronic charge, 1.60 × 10⁻¹⁹ coulombs.
What is the constant q in physics?
1.602176634 × 10⁻¹⁹ C. The elementary charge, usually denoted by e or sometimes q_e, is the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e. This elementary charge is a fundamental physical constant.
In Terms Of The Avogadro Constant And Faraday Constant
If the Avogadro constant N_A and the Faraday constant F are independently known, the value of the elementary charge can be deduced using the formula
e = F / N_A.
(In other words, the charge of a mole of electrons, divided by the number of electrons in a mole, equals the charge of a single electron.)
This method is not how the most accurate values are measured today. Nevertheless, it is a legitimate and still quite accurate method, and experimental methodologies are described below.
The value of the Avogadro constant N_A was first approximated by Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas. Today the value of N_A can be measured at very high accuracy by taking an extremely pure crystal, measuring how far apart the atoms are spaced using X-ray diffraction or another method, and accurately measuring the density of the crystal. From this information, one can deduce the mass of a single atom; and since the molar mass is known, the number of atoms in a mole can be calculated: N_A = M/m.
The limit to the precision of the method is the measurement of F: the best experimental value has a relative uncertainty of 1.6 ppm, about thirty times higher than other modern methods of measuring or calculating the elementary charge.
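Evaluating e = F/N_A is a one-liner (a sketch; the 2019 SI values of F and N_A below are assumed):

```python
# Elementary charge from the Faraday and Avogadro constants: e = F / N_A.
F = 96485.33212      # Faraday constant in C/mol
N_A = 6.02214076e23  # Avogadro constant in 1/mol

e = F / N_A          # expected: ~1.602176634e-19 C
print(f"e = {e:.9e} C")
```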
What Does The Negative Sign Mean In The General Formula For The Work Done In Moving A Charge From One Point To Another In Any Electric Field?
What does the negative sign mean in the general formula for the work done in moving a charge from one point to another in any electric field?
a.) Only negative charges are being considered to be moved in the given electric field.
b.) The charge being moved always loses its energy to the surroundings.
c.) The work done in moving the charge is against the electric field.
d.) The movement of the charges are always oriented such that it moves to the left or downwards.
Distinguishing Temperature Heat And Internal Energy
Using the kinetic theory, a clear distinction between these three properties can be made.
Temperature is related to the kinetic energies of the molecules of a material. It is the average kinetic energy of individual molecules.
Internal energy refers to the total energy of all the molecules within the object. It is an extensive property: while two equal-mass hot ingots of steel may have the same temperature, two of them together have twice as much internal energy as one does.
Finally, heat is the amount of energy flowing from one body to another spontaneously due to their temperature difference.
It must be added that when a temperature difference does exist, heat flows spontaneously from the warmer system to the colder system. Thus, if a 5 kg cube of steel at 100°C is placed in contact with a 500 kg cube of steel at 20°C, heat flows from the 100°C cube to the 20°C cube, even though the internal energy of the 20°C cube is much greater because there is so much more of it.
A particularly important concept is thermodynamic equilibrium. In general, when two objects are brought into thermal contact, heat will flow between them until they come into equilibrium with each other.
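For two bodies of the same material, the equilibrium temperature works out to the mass-weighted mean of the starting temperatures. A sketch using the steel-cube example above (it assumes equal specific heats and no losses to the surroundings):

```python
# Final temperature after heat flows between two bodies of the same material.
def equilibrium_T(m1, T1, m2, T2):
    """Mass-weighted mean temperature; specific heat cancels when it is shared."""
    return (m1 * T1 + m2 * T2) / (m1 + m2)

# 5 kg cube at 100 degC against a 500 kg cube at 20 degC:
print(equilibrium_T(5, 100.0, 500, 20.0))  # ~20.8 degC, close to the large cube
```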
Internal energy, U, is the total energy contained within a system on the microscopic scale:
U = Upot + Ukin
The microscopic potential energy, Upot, involves the chemical bonds between the atoms that make up the molecules, binding forces in the nucleus, and also the physical force fields within the system.
What Is The Quantum Of Charge
All known elementary particles, including quarks, have charges that are integer multiples of 1/3 e. Therefore, one can say that the "quantum of charge" is 1/3 e. In this case, one says that the "elementary charge" is three times as large as the "quantum of charge".
On the other hand, all isolatable particles have charges that are integer multiples of e. Therefore, one can say that the "quantum of charge" is e, with the proviso that quarks are not to be included. In this case, "elementary charge" would be synonymous with the "quantum of charge".
In fact, both terminologies are used. For this reason, phrases like "the quantum of charge" or "the indivisible unit of charge" can be ambiguous unless further specification is given. On the other hand, the term "elementary charge" is unambiguous: it refers to a quantity of charge equal to that of a proton.
Mathematical Biosciences and Engineering, 2019, 16(5): 4107-4121. doi: 10.3934/mbe.2019204
Research article Special Issues
A theta-scheme approximation of basic reproduction number for an age-structured epidemic system in a finite horizon
Wenjuan Guo, Ming Ye, Xining Li, Anke Meyer-Baese, Qimin Zhang
1 School of Mathematics and Statistics, Ningxia University, Yinchuan, 750021, P.R. China
2 Department of Earth, Ocean, and Atmospheric Science and Department of Scientific Computing, Florida State University, Tallahassee, FL 32306, United States
3 Department of Scientific Computing, Florida State University, Tallahassee, FL 32306-4120, United States
Special Issues: Recent advances of mathematical modeling and computational methods in cell and developmental biology
This paper focuses on numerical approximation of the basic reproduction number $\mathcal{R}_0$, which is the threshold defined by the spectral radius of the next-generation operator in epidemiology. Generally speaking, $\mathcal{R}_0$ cannot be explicitly calculated for most age-structured epidemic systems. In this paper, for a deterministic age-structured epidemic system and its stochastic version, we discretize a linear operator produced by the infective population with a theta scheme in a finite horizon, which transforms the abstract problem into the problem of solving the positive dominant eigenvalue of the next-generation matrix. This leads to a corresponding threshold $\mathcal{R}_{0,n}$. Using the spectral approximation theory, we obtain that $\mathcal{R}_{0,n}$ → $\mathcal{R}_0$ as n → +∞. Some numerical simulations are provided to certify the theoretical results.
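As a rough illustration of the threshold computation described here, the spectral radius of a next-generation matrix can be found by power iteration. The 2×2 matrix below is invented for the sketch and is not the paper's age-structured discretization:

```python
# Spectral radius (dominant eigenvalue magnitude) via power iteration.
def spectral_radius(K, iters=200):
    """Largest |eigenvalue| of a square matrix with a dominant eigenvalue."""
    n = len(K)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return norm

K = [[1.2, 0.3],
     [0.4, 0.5]]               # hypothetical next-generation matrix
print(spectral_radius(K))      # ~1.34: this toy system's threshold exceeds 1
```

In the paper's setting, K would instead be the next-generation matrix produced by the theta-scheme discretization, and its spectral radius is the approximate threshold $\mathcal{R}_{0,n}$.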
© 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
communications biology
Genetic analyses of human fetal retinal pigment epithelium gene expression suggest ocular disease mechanisms
Boxiang Liu ORCID: orcid.org/0000-0002-2595-44631 na1,
Melissa A. Calton2 na1,
Nathan S. Abell2,
Gillie Benchorin2,
Michael J. Gloudemans ORCID: orcid.org/0000-0002-9924-99433,
Ming Chen2,
Jane Hu4,
Xin Li ORCID: orcid.org/0000-0002-2122-74615,
Brunilda Balliu5,
Dean Bok4,
Stephen B. Montgomery ORCID: orcid.org/0000-0002-5200-39032,5 &
Douglas Vollrath2
Communications Biology volume 2, Article number: 186 (2019)
The retinal pigment epithelium (RPE) serves vital roles in ocular development and retinal homeostasis but has limited representation in large-scale functional genomics datasets. Understanding how common human genetic variants affect RPE gene expression could elucidate the sources of phenotypic variability in selected monogenic ocular diseases and pinpoint causal genes at genome-wide association study (GWAS) loci. We interrogated the genetics of gene expression of cultured human fetal RPE (fRPE) cells under two metabolic conditions and discovered hundreds of shared or condition-specific expression or splice quantitative trait loci (e/sQTLs). Co-localizations of fRPE e/sQTLs with age-related macular degeneration (AMD) and myopia GWAS data suggest new candidate genes, and mechanisms by which a common RDH5 allele contributes to both increased AMD risk and decreased myopia risk. Our study highlights the unique transcriptomic characteristics of fRPE and provides a resource to connect e/sQTLs in a critical ocular cell type to monogenic and complex eye disorders.
The importance of vision to humans and the accessibility of the eye to examination have motivated the characterization of more than one thousand genetic conditions involving ocular phenotypes1. Among these, numerous monogenic diseases exhibit considerable inter-familial and intra-familial phenotypic variability2,3,4,5,6,7. Imbalance in allelic expression of a handful of causative genes has been documented8, but few common genetic variants responsible for such effects have been discovered.
Complementing our knowledge of numerous monogenic ocular disorders, recent genome-wide association studies (GWAS)9 have identified hundreds of independent loci associated with polygenic ocular phenotypes such as age-related macular degeneration (AMD), the leading cause of blindness in elderly individuals in developed countries10,11, and myopia, the most common type of refractive error worldwide and an increasingly common cause of blindness12,13,14. Despite the rapid success of GWAS in mapping novel ocular disease susceptibility loci, the functional mechanisms underlying these associations are often obscure.
Connecting changes in molecular functions such as gene expression and splicing with specific GWAS genomic variants has aided the elucidation of functional mechanisms. Non-coding variants account for a preponderance of the most significant GWAS loci15,16, and most expression quantitative trait loci (eQTLs) map to non-coding variants17. Thousands of eQTLs have been found in a variety of human tissues18, but ocular cell-types are underrepresented among eQTL maps across diverse tissues.
The retinal pigment epithelium (RPE) is critical for eye development19 and for an array of homeostatic functions essential for photoreceptors20. Variants of RPE-expressed genes have been associated with both monogenic and polygenic ocular phenotypes, including AMD and myopia. We recently implicated an eQTL associated with an RPE-expressed gene as modulating the severity of inherited photoreceptor degeneration in mice21.
To investigate the potential effects of genetically encoded common variation on human RPE gene expression, we set out to identify eQTLs and splice quantitative trait loci (sQTLs) for human fetal RPE (fRPE) cells cultured under two metabolic conditions. Here we describe hundreds of loci of each type, some of which are condition-specific, and connect the mitochondrial oxidation of glutamine with increased expression of lipid synthesis genes, a pathway important in AMD. We find that common variants near genes with disproportionately high fRPE expression explain a larger fraction of risk for both AMD and myopia than variants near genes enriched in non-ocular tissues. We show that a particular variant in RDH5 is associated with increased skipping of a coding exon, nonsense-mediated decay (NMD) of the aberrant transcript, and three-fold lower minor allele-specific expression. The e/sQTL marked by this variant colocalizes with high statistical significance with GWAS loci for both AMD and myopia risk, but with opposing directions of effect. Our study lays a foundation for linking e/sQTLs in a critical ocular cell type to mechanisms underlying monogenic and polygenic eye diseases.
The transcriptome of human fRPE cells
We studied 23 primary human fRPE lines (Supplementary Data 1), all generated by the same method in a single laboratory22 and cultured for at least 10 weeks under conditions that promote a differentiated phenotype23. DNA from each line was genotyped at 2,372,784 variants. Additional variants were imputed and phased using Beagle v4.124 against 1000 Genomes Phase 325 for a total of ~13 million variants after filtering and quality control (see Methods section). Comparison of fRPE chromosome 1 genotypes to those of 104 samples from 1000 Genomes indicated that our cohort is mostly African American in origin, with 4 samples of European ancestry (Supplementary Fig. 1).
Our goal was to identify RPE eQTLs relevant to the tissue's role in both developmental and chronic eye diseases. The balance between glycolytic and oxidative cellular energy metabolism changes during development and differentiation26, and loss of RPE mitochondrial oxidative phosphorylation capacity may contribute to the pathogenesis of AMD27, among other mechanisms. We therefore obtained transcriptional profiles of each fRPE line cultured in medium that favors glycolysis (glucose plus glutamine) and in medium that promotes oxidative phosphorylation (galactose plus glutamine)28. We performed 75-base paired-end sequencing to a median depth of 52.7 million reads (interquartile range: 45.5 to 60.1 million reads) using a paired sample design to minimize batch effects in differential expression analysis (Supplementary Data 2). To determine the relationship between primary fRPE and other tissues, we visualized fRPE in the context of 53 tissues from the GTEx Project v718. The fRPE samples formed a distinct cluster situated between heart, skeletal muscle, and brain (Fig. 1a), tissues that, like the RPE, are metabolically active and capable of robust oxidative phosphorylation.
Characteristics of the fRPE transcriptome. a Multidimensional scaling against GTEx tissues locates fRPE near heart, skeletal muscle, and brain samples. b A subset of the fRPE-selective gene set defined by z-score >4 is shown including RPE signature genes such as RPE65 and new genes such as TYR. Red/pink dots indicate fRPE-selective genes with z-score >4 in both glucose and galactose conditions. c, d Two examples of the expression levels of fRPE-selective genes in various GTEx tissues. Only the top 25 tissues are plotted for visual clarity. For a, c, and d, red indicates fRPE glucose condition and blue indicates fRPE galactose condition. For c and d, each element of the boxplot is defined as follows: centerline, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range
To identify genes with disproportionately high levels of expression in the fRPE, we compared the median reads per kilobase of transcript per million mapped reads (RPKMs) of fRPE genes against GTEx tissues. We defined fRPE-selective genes as those with median expression at least four standard deviations above the mean (see Methods section). Under this definition, we found 100 protein-coding genes and 30 long non-coding RNAs (lncRNAs) to be fRPE-selective (Fig. 1b and Supplementary Data 3). Multiple previously defined RPE "signature" genes29,30,31 are present in our list including RPE65 (Fig. 1c) and RGR (Fig. 1d). Using this set of genes, we performed Gene Set Enrichment Analysis (GSEA)32 against 5,917 gene ontology (GO) annotations33. The two gene sets most enriched with fRPE-selective genes were pigment granule and sensory perception of light stimulus (FDR < 1 × 10−3), consistent with the capacity of fRPE to produce melanin and the tissue's essential role in the visual cycle. Supplementary Data 4 lists the 29 GO pathways enriched using a conservative FWER < 0.05. Recurrent terms in enriched pathway annotations such as pigmentation, light, vitamin, protein translation, endoplasmic reticulum and cellular energy metabolism suggest specific functions that are central to fRPE and outer retinal homeostasis.
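The z-score criterion for tissue selectivity can be sketched as follows (a minimal illustration with hypothetical function and variable names; the paper's exact handling of GTEx medians may differ). Note that a z-score above 4 is only attainable when many tissues are compared, as with the 53 GTEx tissues plus fRPE:

```python
import numpy as np

def selective_genes(median_expr, tissue_names, target="fRPE", z_cutoff=4.0):
    """Flag genes whose median expression in the target tissue lies more than
    z_cutoff standard deviations above the mean across all tissues.

    median_expr: array of shape (n_genes, n_tissues) of median RPKM values.
    Illustrative sketch only; not the authors' exact implementation.
    """
    t = tissue_names.index(target)
    mu = median_expr.mean(axis=1)               # per-gene mean across tissues
    sd = median_expr.std(axis=1, ddof=1)        # per-gene standard deviation
    z = (median_expr[:, t] - mu) / sd           # z-score of the target tissue
    return z > z_cutoff, z
```

With this rule, a gene expressed uniformly across tissues never qualifies, while a gene with a single extreme tissue qualifies only when enough tissues contribute to the denominator.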
Transcriptomic differences across two metabolic conditions
To gain insight into the response of fRPE cells to altered energy metabolism, we compared gene expression between the two culture conditions using DESeq234, correcting for sex, ancestry, RIN, and batch (see Methods section). A total of 837 protein coding and lncRNA genes showed evidence of significant differential expression (FDR < 1 × 10−3, Fig. 2a and Supplementary Data 5). Notably, three of the top ten differentially expressed genes are involved in lipid metabolism (SCD, INSIG1, and HMGCS2 in order). SCD codes for a key enzyme in fatty acid metabolism35, and its expression in RPE is regulated by retinoic acid36. INSIG1 encodes an insulin-induced protein that regulates cellular cholesterol concentration37. HMGCS2 encodes a mitochondrial enzyme that catalyzes the first step of ketogenesis38, and this enzyme plays a crucial role in phagocytosis-dependent ketogenesis in fRPE39. To understand the broader impact induced by changes in energy metabolism, we performed pathway enrichment analysis using GSEA32 and found that the top two upregulated pathways in galactose medium are cholesterol homeostasis and mTORC1 signaling (FDR < 1 × 10−4, Fig. 2b). Consistent with the cholesterol finding, forcing cells to rely primarily on oxidation of glutamine for ATP generation increases expression of a suite of genes that promotes lipid synthesis and import (Fig. 2c).
Differential expression across two metabolic conditions. a Transcriptome-wide differential expression patterns: red indicates upregulated in glucose, blue indicates upregulated in galactose. b Gene set enrichment analysis of differentially expressed genes. The pathway most enriched is cholesterol homeostasis (upregulated in galactose condition). c Key genes involved in cholesterol biosynthesis and import are upregulated in response to the increased oxidation of glutamine that occurs in the galactose condition. Estimated FDR values are shown next to the gene names
fRPE-selective genes are enriched in genetic ocular diseases
Disease-associated genes can have elevated expression levels in effector tissues40. To determine whether ocular disease genes have elevated expression levels in fRPE, we used a manually curated list of 257 ocular disease-related genes41 (see Methods section). Compared to all other protein-coding genes, ocular disease-related genes are more specific to fRPE (two-sided t-test p-value: 1.6 × 10−10). Further, ocular disease gene expression demonstrated higher specificity to fRPE than to GTEx tissues (Fig. 3a), supporting fRPE as a model system for a number of eye diseases. As a control, we repeated the analysis for epilepsy genes (n = 189) and observed elevated expression levels in brain tissues, as expected (Supplementary Fig. 2).
fRPE-selective genes are enriched in monogenic and polygenic diseases. a Genes causal for inherited retinal disorders (IRD) have elevated expression in fRPE. b, c Variants near RPE-selective genes explain a larger proportion of AMD (b) and myopia (c) risk than those near GTEx tissue-selective genes. The red bar represents the top 500 fRPE-selective genes
Unlike Mendelian ocular diseases, polygenic ocular disorders are characterized by variants with smaller effect sizes scattered throughout the genome. Using two well-powered GWAS of AMD42 and myopia43, we performed stratified linkage disequilibrium (LD) score regression to determine the heritability explained by fRPE. Using a previously established pipeline44, we selected the top 500 tissue-enriched genes for fRPE and various GTEx tissues and assigned variants within one kilobase of these genes to each tissue (see Methods section). Risk variants for both AMD and myopia were more enriched around fRPE-selective genes than GTEx tissue-selective genes (Fig. 3b, c). As an assessment of the robustness of the LD score regression results, we repeated the analysis with the top 200 and 1000 tissue-specific genes. A high ranking for fRPE was consistent across all three cutoffs (Supplementary Fig. 3).
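The step of assigning variants to tissue gene sets, which feeds the stratified LD score regression, can be sketched as below (the data layout is an assumption for illustration; the actual pipeline additionally models LD structure and baseline annotations):

```python
def variants_near_genes(variant_pos, gene_intervals, window=1000):
    """Return indices of variants within `window` bp of any gene interval.

    variant_pos: list of (chrom, pos) tuples.
    gene_intervals: list of (chrom, start, end) for the top tissue-enriched
    genes. Simplified sketch of the annotation step only; not the full
    stratified LD score regression method.
    """
    hits = set()
    for chrom, start, end in gene_intervals:
        lo, hi = start - window, end + window    # pad gene body by the window
        for i, (c, p) in enumerate(variant_pos):
            if c == chrom and lo <= p <= hi:
                hits.add(i)
    return sorted(hits)
```

The resulting variant set defines the per-tissue annotation whose enrichment in GWAS heritability is then estimated.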
e/sQTL discovery
To determine the genetic effects on gene expression in fRPE, we used RASQUAL45 to map eQTLs by leveraging both gene-level and allele-specific count information to boost discovery power. Multiple-hypothesis testing for both glucose and galactose conditions was conducted jointly with a hierarchical procedure called TreeQTL46. At FDR < 0.05, we found 687 shared, 264 glucose-specific, and 166 galactose-specific eQTLs (Table 1, Supplementary Data 6 and 7, Fig. 4a and Supplementary Figs. 4 and 5). An example of a shared eQTL is RGR (Fig. 4d), which encodes a G protein-coupled receptor that is mutated in retinitis pigmentosa47. An example of a glucose-specific eQTL is ABCA1 (Fig. 4b), which encodes an ATP-binding cassette transporter that regulates cellular cholesterol efflux48. Common variants near ABCA1 have been associated with glaucoma49 and AMD42. An example of a galactose-specific eQTL is PRPF8 (Fig. 4c), which encodes a splicing factor50. PRPF8 mutations are a cause of autosomal dominant retinitis pigmentosa51 and lead to RPE dysfunction in a mouse model52.
Table 1 Expression QTL discoveries
Landscape of genetic regulation of RPE gene expression. a We discovered 687, 264, and 166 eQTLs that are shared, glucose-specific, and galactose-specific, respectively. Comparison with GTEx eGenes revealed three shared eGenes that are currently unique to fRPE. b A glucose-specific eQTL in ABCA1. c A galactose-specific eQTL in PRPF8. d A shared eQTL in RGR. The y-axis of panels b–d denotes normalized expression values. e–g Evidence for fRPE-specificity for three eQTLs compared to GTEx. Black dashed lines indicate FDR = 0.1. Minor alleles are indicated by lowercase. For b, c, and d, each element of the boxplot is defined as follows: centerline, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range
Differential expression alone is unlikely to account for the condition-specific nature of the eQTLs we identified, because only about a quarter of the genes with condition-specific eQTLs are differentially expressed (FDR < 0.05) and almost all of these exhibit an absolute fold change of less than two. Rather, it is likely that regulatory specificity is the underlying cause of these eQTLs. We therefore used HOMER53 to identify transcription factor binding motifs enriched around metabolic condition-specific eQTLs (see Methods section). Two motifs, TEAD1 (p < 1 × 10−6) and ZEB1 (p < 1 × 10−3), are among the top five motifs in the galactose condition (Supplementary Data 8). TEAD1 is known to play a role in aerobic glycolysis reprogramming54, and ZEB1 is known to render cells resistant to glucose deprivation55. We did not find enriched motifs in the glucose condition for transcription factors with well-known metabolic functions.
We compared fRPE to GTEx eGenes using a previously established two-step FDR approach56. We used fRPE-shared eGenes (FDR < 0.05 in both metabolic conditions) as the discovery set to remove any treatment-dependent regulatory effect, and used GTEx eGenes with a relaxed threshold (FDR < 0.1) as the replication set. eGenes from the discovery set not recapitulated in the replication set were defined as fRPE-selective eGenes. This approach returned three genes (Fig. 4e–g): TYR, encoding an oxidase controlling the production of melanin; CRX, encoding a transcription factor critical for photoreceptor differentiation; and MFRP, encoding a secreted WNT ligand important for eye development. The TYR eQTL maps to a variant (rs4547091) previously described as located in an OTX2 binding site and responsible for modulating TYR promoter activity in cultured RPE cells57. All three genes are also fRPE-selective genes (Fig. 1), suggesting that apparent regulatory specificity is a by-product of expression selectivity. We also compared our eGenes to the EyeGEx database58. Among the 687 eGenes shared across both conditions, 498 (72.5%) are also eGenes reported in EyeGEx.
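The two-step FDR logic can be sketched as simple set operations, assuming dictionaries mapping each gene to its FDR (the data structures and function name are hypothetical):

```python
def frpe_selective_egenes(glu_fdr, gal_fdr, gtex_fdr_by_tissue,
                          disc_cut=0.05, rep_cut=0.1):
    """Two-step FDR sketch: the discovery set contains eGenes significant in
    BOTH fRPE conditions; the replication set contains eGenes significant in
    ANY GTEx tissue at a relaxed threshold. Genes discovered but never
    replicated are called fRPE-selective. Illustrative only.
    """
    discovery = {g for g in glu_fdr
                 if glu_fdr[g] < disc_cut and gal_fdr.get(g, 1.0) < disc_cut}
    replicated = {g for fdrs in gtex_fdr_by_tissue.values()
                  for g, q in fdrs.items() if q < rep_cut}
    return discovery - replicated
```

Using the stricter threshold for discovery and the relaxed one for replication makes the "not replicated" call conservative: a gene is declared fRPE-selective only if it misses even a lenient significance bar in every GTEx tissue.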
We also assessed the genetic effect on splicing by quantifying intron usages with LeafCutter59 and mapping splicing quantitative trait loci (sQTL) with FastQTL60 in permutation mode to obtain intron-level p-values. Following an established approach59, we used a conservative Bonferroni correction across introns within each intron cluster and calculated FDR across cluster-level p-values (see Methods section). We found 210 and 193 sQTLs at FDR < 0.05 for glucose and galactose conditions, respectively (Table 2, Supplementary Data 9 and 10). The top sQTL in the glucose condition regulates splicing in ALDH3A2 (FDR < 2.06 × 10−9), which codes for an aldehyde dehydrogenase isozyme involved in lipid metabolism61. Mutations in this gene cause Sjogren-Larsson syndrome62, which can affect the macular RPE63. The top sQTL in the galactose condition regulates splicing of transcripts encoding CAST, a calcium-dependent protease inhibitor involved in the turnover of amyloid precursor protein64.
Table 2 Splicing QTL discoveries
Fine mapping of complex ocular disease risk loci
To assess whether specific instances of GWAS signals can be explained by eQTL or sQTL signals, we performed colocalization analysis with a modified version of eCAVIAR65 (see Methods section). All variants within a 500-kilobase window around any GWAS (p-value < 1 × 10−4) or QTL (p-value < 1 × 10−5) signal were used as input to eCAVIAR, and any locus with colocalization posterior probability (CLPP) >0.01 was considered significant. To identify condition-specific colocalization events, we ran eCAVIAR separately for the two metabolic conditions (see Methods section). For the AMD GWAS, we identified four eQTL colocalization events for each condition (Supplementary Fig. 6). One of these, WDR5, demonstrates glucose-specific colocalization (CLPP: glucose = 0.033 and galactose = 0.002). For the myopia GWAS, we identified three and seven colocalization events for the galactose and glucose conditions, respectively. Three are condition specific (PDE3A, ETS2, and ENTPD5; Supplementary Fig. 7). For example, PDE3A shows galactose-specific colocalization (CLPP: glucose = 0.0004; galactose = 0.014). eQTLs at PARP12 and CLU colocalized with AMD and myopia signals, respectively, under both conditions (Fig. 5a, d, Supplementary Figs. 6 and 7). While neither locus reached genome-wide significance in the respective GWAS, the significant colocalizations we describe implicate PARP12 and CLU as new candidate genes for these disorders.
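The CLPP statistic can be illustrated with a simplified sketch assuming a single causal variant per study: for each shared variant, the posterior probabilities of causality in the GWAS and in the QTL study are multiplied, and the products are summed over the locus (the full eCAVIAR model handles LD and multiple causal variants; this is a conceptual reduction, not the tool itself):

```python
def clpp(gwas_posteriors, qtl_posteriors):
    """Simplified eCAVIAR-style colocalization posterior probability.

    Inputs are per-variant posterior probabilities of causality (one entry
    per shared variant, same order in both lists). Under the single-causal-
    variant assumption, summing the per-variant products estimates the
    probability that the SAME variant is causal in both studies.
    """
    return sum(g * q for g, q in zip(gwas_posteriors, qtl_posteriors))
```

A locus where both posteriors concentrate on the same variant yields a high score; two sharp but disjoint signals yield a score near zero, which is why a modest cutoff such as 0.01 already filters most non-colocalizing loci.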
Fine mapping of disease-associated variants using fRPE gene regulation. a Colocalization posterior probability for fRPE e/sQTLs with AMD. b, c Scatter plots demonstrate clear colocalization between AMD GWAS signal at rs3138141 and RDH5 eQTL (b) and sQTL (c). d Colocalization posterior probability for fRPE e/sQTLs with myopia. e, f Scatter plots demonstrate clear colocalization between myopia GWAS signal at rs3138141, the same variant identified for AMD, and RDH5 eQTL (e) and sQTL (f). a–f Colocalization results are with glucose QTLs. Galactose QTL colocalizations can be found in Figs. S18–19. g Relative allelic expression estimated by RASQUAL with 95% confidence intervals is shown. h Increased skipping of RDH5 exon 3 (middle black rectangle) is associated with the minor allele at rs3138141. The average read counts are shown for three splice junctions in groups of fRPE cells with different genotypes. The proportion of counts for all three sites for a given junction and genotype is shown in parentheses. Exon and intron lengths are not drawn to scale. Minor alleles are indicated by lowercase. i Gel image showing the RDH5 normal isoform amplified from CHX- or DMSO-treated ARPE-19 cells. j Gel image showing the RDH5 mis-spliced isoform amplified from CHX- or DMSO-treated ARPE-19 cells. k Relative fold change between CHX and DMSO treatments for normal and mis-spliced RNA isoforms. Error bars indicate standard error of the mean for n = 3 independent experiments. *p < 0.05
Among the four genes exceeding our threshold for eQTL and AMD GWAS colocalization, RDH5, encoding a retinol dehydrogenase that catalyzes the conversion of 11-cis retinol to 11-cis retinal in the visual cycle66, showed the most significant signal (Fig. 5a and Supplementary Data 11). RDH5 was previously suggested as an AMD candidate gene42, but no mechanism was proposed. Two tightly linked AMD-associated variants (rs3138141 and rs3138142, r2 = 0.98) are highly correlated with RDH5 expression (Fig. 5b). The minor haplotype identified by the rs3138141 "a" allele is associated with a significantly smaller percentage of total RDH5 expression (26.4%) than the major haplotype identified by the "C" allele (73.6%) (Fig. 5g). We found no evidence for an effect on transcripts from the adjacent BLOC1S1 gene or on BLOC1S1-RDH5 read-through transcripts. The same variants mark an RDH5 sQTL (Fig. 5a, c) associated with differences in the usage of exon 3 of the transcript; samples that are heterozygous at rs3138141 (Ca) exhibit an average of more than three times the amount of exon 3 skipping compared to CC homozygous samples (Fig. 5h and Supplementary Fig. 8). The same e/sQTL also colocalized with a myopia GWAS signal (Fig. 5d–f, Supplementary Data 12), suggesting a mechanism for the prior association of the RDH5 locus with myopia67 and refractive error13.
NMD as a putative mechanism underlying an RDH5 eQTL
The association of the rs3138141/2 minor haplotype with both an RDH5 eQTL and sQTL suggests a mechanistic relationship. We estimate that ~80% of isoforms transcribed from the "C" haplotype are normal, whereas ~75% of isoforms transcribed from the "a" haplotype are mis-spliced (see Methods section). The increased skipping of exon 3 (out of 5) associated with the minor haplotype results in more transcripts with a frameshift and a premature termination codon (PTC) near the 5′ end of exon 4. Many mammalian transcripts with PTCs are subject to NMD, particularly when the PTC is not located in the last exon68. Treatment of cells with protein synthesis inhibitors such as cycloheximide (CHX) has been shown to increase the abundance of transcripts subject to NMD69. To assess a possible role for NMD in the stability of RDH5 transcripts, we treated differentiated immortalized human RPE cells (ARPE-19) with CHX and quantified the abundance of the normal and skipped exon 3 isoforms by RT-PCR. CHX caused a significant increase in the abundance of the skipped exon 3 isoform compared to the normal isoform (Fig. 5i–k and Supplementary Fig. 9). These data are consistent with a model in which the minor allele promotes the formation of an aberrant RDH5 mRNA that is subject to NMD, leading to an overall reduction in the steady state levels of RDH5 transcripts.
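The exon-position rule invoked above can be illustrated with a small heuristic based on the widely cited "50–55 nucleotide rule" for NMD; real NMD determinants are more complex, and the function below is illustrative only:

```python
def likely_nmd_target(exon_lengths, ptc_tx_pos, boundary_rule=55):
    """Heuristic NMD call: a premature termination codon (PTC) located more
    than ~55 nt upstream of the last exon-exon junction typically triggers
    nonsense-mediated decay.

    exon_lengths: spliced exon lengths in 5'->3' order.
    ptc_tx_pos: 0-based transcript coordinate of the PTC's first base.
    Illustrative sketch; not a validated NMD predictor.
    """
    # Transcript coordinate of the junction between the penultimate
    # and last exon (sum of all exon lengths except the last).
    last_junction = sum(exon_lengths[:-1])
    return ptc_tx_pos < last_junction - boundary_rule
```

Under this rule, the frameshift-induced PTC near the 5′ end of exon 4 of a five-exon transcript sits well upstream of the last junction, consistent with the observed NMD sensitivity of the exon-skipped RDH5 isoform.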
The importance of the RPE for development and lifelong homeostasis of the eye has motivated numerous studies of the RPE transcriptome. Several of these studies proposed similar sets of RPE "signature" genes, the largest of which comprises 171 genes29,30,31. Only 23 of these genes are present among our group of 100 fRPE-selective protein-coding genes. Our approach of comparing fRPE expression levels to GTEx data, which almost exclusively derive from adult autopsy tissue specimens, may have captured genes highly expressed in cultured and/or fetal cells. Absence in GTEx of pure populations of specialized cell types, especially ocular, may explain other genes in our set. Still, many of the genes we identified are known to serve vital functions in the RPE, as demonstrated by pathway enrichment for pigment synthesis and visual processes. We also identified 30 enriched lncRNAs, a class of transcripts not included in previous signature gene sets. The most highly expressed lncRNA in our list, RMRP, is critical for proper mitochondrial DNA replication and OXPHOS complex assembly in HeLa cells70, but its role in the RPE has not yet been investigated. RPE-enriched genes whose functions have not been studied in the tissue afford opportunities for advancing understanding of this important epithelial layer.
Our findings have potential implications for phenotypic variability in monogenic ocular diseases. Mutations in all three of the fRPE-selective eGenes cause monogenic eye diseases. For example, heterozygous mutations in the transcription factor CRX cause dominant forms of photoreceptor degeneration, which can exhibit variable age at onset and disease progression among members of the same family4. Genetically encoded variation in the transcript levels of normal or mutant CRX alleles may contribute to such variable expressivity. Indeed, mouse models of CRX-associated retinopathies provide evidence for a threshold effect in which small changes in expression cause large differences in phenotype71. Mutations in MFRP cause extreme hyperopia (farsightedness). Affected individuals usually have two mutant alleles, but inheritance of a lower-expressing normal allele could explain an affected heterozygous individual in a family with otherwise recessive disease5. The substantial number of fRPE eQTLs associated with other ocular diseases (Fig. 3a) supports a contribution of common genetic variants to the widespread phenotypic variability observed in monogenic eye disorders.
Our findings also have implications for complex ocular diseases. Evidence suggests that defects in RPE energy metabolism contribute to the pathogenesis of AMD, the hallmark of which is accumulation of cholesterol rich deposits in and around the RPE72,73. Forcing fRPE cells to rely on oxidation of glutamine, the most abundant free amino acid in blood, caused upregulation of genes involved in the synthesis of cholesterol, monounsaturated and polyunsaturated fatty acids, as well as genes associated with lipid import. Transcripts for three of the upregulated genes (FADS1, FADS2, and ACAT2) are increased in macular but not extramacular RPE from individuals with early-stage AMD74.
Co-localization of the same RDH5 e/sQTL with both AMD and myopia GWAS loci suggests risk mechanisms for these very different complex diseases. The rs3138141/2 minor haplotype confers an elevated risk for AMD42, but is protective for myopia13,43,67. Reduction in RDH5 activity as a risk factor for AMD is consistent with rare RDH5 loss-of-function mutations that cause recessive fundus albipunctatus, which can include macular atrophy75,76. More puzzling is the relationship between lower RDH5 transcript levels (and presumably enzyme activity) and a reduced risk of myopia. RDH5 is best known for its role in the regeneration of 11-cis retinal in the visual cycle, but the enzyme has also been reported to be capable of producing retinoids suitable for retinoic acid signaling77,78. Evidence from animal models implicates retinoic acid in eye growth regulation12, and retinal all-trans retinoic acid levels are elevated in a guinea pig model of myopia79. Thus the same allele, which has risen to substantial frequencies in some populations (0.38 minor allele frequency in South Asians and 0.19 in Europeans; https://www.ncbi.nlm.nih.gov/projects/SNP/), may dampen retinoic acid signaling during eye development and growth, and later contribute to chronic photoreceptor dysfunction in older adults.
The eye is a highly specialized organ with limited representation in large-scale functional genomics datasets. Our analysis of genetic variation and metabolic processes in fRPE cells, even with modest sample sizes, expands our ability to map functional variants with potential to contribute to complex and monogenic eye diseases. Future studies with larger sample sizes from geographically diverse populations, and/or targeting other ocular cell types, will likely discover additional e/sQTLs and functional variants involved in genetic eye diseases.
Sample acquisition and cell culture
Primary human fetal RPE (fRPE) lines were isolated from fetal eyes (Advanced Biosciences Resources, Inc., Alameda, CA) by collecting and freezing non-adherent cells cultured in low calcium medium as described22. When needed, fRPE cells were thawed and plated onto 6-well plates in medium as described23 with 15% FBS. The next day, medium was changed to 5% FBS and the cells were allowed to recover for two additional days. Cells were then trypsinized in 0.25% Trypsin-EDTA (Life Technologies Corporation), resuspended in medium with 15% FBS and plated onto human extracellular matrix-coated (BD Biosciences) Corning 12-well transwells (Corning Inc., Corning, NY) at 240 K cells per transwell. The next day medium was changed to 5% FBS. Cells were cultured for at least 10 weeks to become differentiated (transepithelial resistance of >200 Ω * cm2) and highly pigmented. Medium with 5% FBS was changed every 2–3 days. For the galactose and glucose specific culture conditions, differentiated fRPE cells were cultured for 24 h prior to RNA isolation in DMEM medium (Sigma) with 1 mM sodium pyruvate (Sigma), 4 mM l-glutamine (Life Technologies Corporation), 1% Penicillin-Streptomycin (Life Technologies Corporation), and either 10 mM d-(+)-glucose (Sigma) or 10 mM d-(+)-galactose (Sigma)28. The fRPE lines studied here are not available for distribution.
Genotype data and quality control
Microarray library preparation and genotyping
All 24 RPE samples were genotyped on three Illumina Infinium Omni2.5-8 BeadChips using the Infinium LCG Assay workflow (https://www.illumina.com/products/by-type/microarray-kits/infinium-omni25-8.html). A total of 200 ng of genomic DNA was extracted and amplified to generate a sufficient quantity of DNA for each individual sample. The amplified DNA samples were fragmented and hybridized overnight on the Omni2.5-8 BeadChips. The loaded BeadChips underwent single-base extension and staining, and were imaged on an iScan system to obtain genotyping information. Genotyping data were exported from Illumina GenomeStudio to ped and map pairs, merged, and converted to the VCF format using PLINK v1.980. We removed variants that were missing in more than 5% of samples.
Variant annotation
We annotated variants using genomic features (including downstream-gene variant, exonic variant, intronic variant, missense variant, splice-acceptor variant, splice-donor variant, splice-region variant, synonymous variant, upstream-gene variant, 3′-UTR variant, 5′-UTR variant), loss-of-function, and nonsense-mediated decay predictions, and clinical databases (including ClinVar, OMIM and OrphanNet) using SnpEff v4.3i81.
Imputation and phasing
We used Beagle v4.182 to perform genotype imputation and phasing. Genotypes were imputed and phased against the 1000 Genomes Project phase 3 reference panel. Before imputation and phasing, we filtered the original VCF file to bi-allelic SNP sites on autosomes and removed sites with more than 5% missing genotypes. We also re-coded the VCF file based on the reference and alternative allele designations of the 1000 Genomes Project phase 3 reference panel using the conform-gt program provided with the Beagle software.
Prior to imputation, we performed standard pre-imputation QC by removing variants missing in more than 5% of samples and used the filtered call set as input to Beagle. After imputation and phasing, we removed variants with allelic r2 < 0.8 (higher allelic r2 indicates higher confidence). Standard post-imputation QC also calls for removing variants with low Hardy–Weinberg Equilibrium (HWE) p-values. Owing to the extensive admixture in our study cohort, we reasoned that an HWE test may have trouble distinguishing genotyping error from admixture. We therefore opted not to apply an HWE filter but instead removed multi-allelic variants (variants with more than two alleles). The Ts/Tv ratio of the filtered call set was above 2.0 for all chromosomes except chromosomes 8 (1.98) and 16 (1.93). Several chromosomes have Ts/Tv ratios greater than 2.1, indicating that they are enriched for known (vs. novel) variants (Supplementary Fig. 10). To detect sample duplication, we plotted genotype correlation across every pair of samples and found that two samples (samples 3 and 5) were duplicates of each other (Supplementary Fig. 11). We removed one of the pair (sample 5) at random. The filtered VCF files were used for downstream analysis.
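The duplicate-detection step can be sketched as a pairwise correlation of genotype dosage vectors (the threshold shown is illustrative, not the value used in the study):

```python
import numpy as np

def find_duplicate_pairs(dosages, r_cutoff=0.95):
    """Flag sample pairs whose genotype dosage vectors (0/1/2 per variant)
    are nearly identical, indicating likely duplicates.

    dosages: array of shape (n_samples, n_variants).
    r_cutoff is an assumed threshold for this sketch.
    """
    r = np.corrcoef(dosages)                     # sample-by-sample correlation
    n = r.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if r[i, j] > r_cutoff]
```

Unrelated individuals typically show modest genotype correlation, so duplicate pairs stand out sharply near r = 1.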
Sex determination
We determined the biological sex of each donor using genotype information. We extracted genotype dosages with bcftools and calculated the proportion of heterozygous SNPs (heterozygous: dosage = 1; homozygous: dosage = 0 or 2) for chromosomes 1 and X. Donors with low heterozygosity on chromosome X (proportion of heterozygous SNPs ≈ 0) were defined as males. Chromosome 1 was used as a control to establish the baseline for heterozygosity. This cohort has 11 male and 13 female individuals (Supplementary Fig. 12).
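The chromosome X heterozygosity check can be sketched as follows (the cutoff and function name are illustrative assumptions):

```python
def infer_sex(dosages_by_chrom, het_cutoff=0.05):
    """Call a donor male when the fraction of heterozygous calls
    (dosage == 1) on chromosome X is near zero.

    dosages_by_chrom: dict of chromosome name -> list of genotype dosages,
    e.g. {"X": [...], "1": [...]}; chromosome 1 serves as the baseline
    control described in the text. het_cutoff is an assumed threshold.
    """
    x = dosages_by_chrom["X"]
    het_frac = sum(1 for d in x if d == 1) / len(x)
    return "male" if het_frac < het_cutoff else "female"
```

Males, carrying a single X, should have essentially no heterozygous X calls outside the pseudoautosomal regions, so the heterozygous fraction separates the sexes cleanly.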
Ancestry determination
We determined the ancestry of each donor using genotype information. We extracted genotype dosages of chromosome 1 for the 23 RPE samples and 4 samples from each of the 26 populations (104 individuals in total) in the 1000 Genomes phase 3 version 5 dataset25. We calculated principal components using the prcomp function in R. The top three principal components explained the most variability (Supplementary Fig. 13) and were used for downstream analysis. The first two principal components clearly separate the European, African, and Asian populations. Four RPE samples are European, and the rest are admixed; most admixed individuals are African American (Supplementary Fig. 1).
Transcriptomic data and quality control
RNA-seq library preparation and sequencing
RNA was extracted using TRIzol Reagent (Invitrogen) per manufacturer instructions. RNA sequencing was performed on all samples with an RNA integrity number (RIN) of 8.0 or higher and with at least 500 ng total RNA. Stranded, poly-A+ selected RNA-seq libraries were generated using the Illumina TruSeq Stranded mRNA protocol. We performed 75 bp paired-end RNA sequencing on an Illumina NextSeq 500 on all RPE samples (Supplementary Data 2). Glucose and galactose samples from each line were sequenced together to minimize batch effects.
RNA sequencing read mapping
Raw data was de-multiplexed using bcl2fastq2 from Illumina with default parameters. Reads were aligned against the hg19 human reference genome with STAR (v2.4.2a)83 using GENCODE v19 annotations84 and otherwise default parameters. After alignment, duplicate reads were marked using Picard MarkDuplicates (v2.0.1) and reads marked as duplicates or with non-perfect mapping qualities were removed.
Gene and splicing event quantification
We used HTSeq v0.6.085 to count the number of reads overlapping each gene based on the GENCODE v19 annotation. We counted reads on the reverse strand (appropriate for Illumina's stranded TruSeq libraries), required a minimum alignment quality of 10, and otherwise used default parameters. We also quantified RPKM using RNA-SeQC v1.1.886 with the hg19 reference genome and GENCODE v19 annotation, with flags "-noDoC -strictMode" but otherwise default parameters. We quantified allele-specific expression using the createASVCF.sh script from RASQUAL45 with default parameters. For splicing quantification, we used LeafCutter59 to determine intron excision levels with default parameters. Briefly, we first converted bam files to splice junction counts (bam2junc.sh) and clustered introns based on sharing of splice donor or acceptor sites (leafcutter_cluster.py). For each cluster, we required a minimum of 30 reads and a minimum fraction of 0.1% in support of each junction, and required that no intron exceed 100 kb.
We sequenced the RNA-seq libraries to a median depth of 52.7 million reads (interquartile range: 45.5–60.1 million), for a total of 2.5 billion reads (Supplementary Fig. 14a). We checked the number of uniquely mapped reads to ensure a sufficient number of mapped reads: the libraries had a median of 46.8 million (88.8%) uniquely mapped reads, with an interquartile range of 41.0–55.2 million (Supplementary Fig. 14b). We ran VerifyBamID87 with the parameters "--ignoreRG --best" on the RNA-seq BAM files, using the genotype VCF files as a reference, and did not find any sample swaps.
Normalization of quantifications
We extracted hidden factors from the RNA sequencing data using surrogate variable analysis (sva)88, both jointly (protecting the treatment variable) and separately for the glucose- and galactose-treated samples. Prior to estimating hidden factors, the raw count gene expression data were library-size corrected, variance stabilized, and log2 transformed using the R package DESeq234. Genes with an average read count below 10 and with zero counts in more than 20% of samples were considered not expressed and filtered out (to remove distributional tails). A total of 15,056 and 15,062 expressed genes remained for glucose and galactose, respectively. Because library size correction depends on all genes, the filtered gene set was again corrected for library size, variance stabilized, and log2 transformed with DESeq2 before being used as input to sva. We ran sva, as implemented in the sva R package, with default parameters, and obtained seven significant surrogate variables from the joint analysis and four and five significant surrogate variables for the glucose and galactose conditions, respectively. We also extracted surrogate variables for the splicing-level quantification; the joint, glucose, and galactose analyses returned four, two, and two factors, respectively.
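DESeq2's library-size correction uses a median-of-ratios estimator; a minimal sketch of that idea (ignoring zero counts and the subsequent variance stabilization, which DESeq2 handles internally):

```python
import math

def size_factors(counts):
    # counts: genes x samples raw count matrix (zeros omitted for simplicity)
    n_samples = len(counts[0])
    # per-gene geometric mean across samples acts as a pseudo-reference
    geo = [math.exp(sum(math.log(c) for c in row) / n_samples) for row in counts]
    factors = []
    for j in range(n_samples):
        # each sample's size factor is the median ratio to the pseudo-reference
        ratios = sorted(row[j] / g for row, g in zip(counts, geo))
        m = len(ratios)
        median = ratios[m // 2] if m % 2 else 0.5 * (ratios[m // 2 - 1] + ratios[m // 2])
        factors.append(median)
    return factors
```

A sample sequenced twice as deeply receives a size factor twice as large, so dividing counts by the factors removes depth differences.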
Correlation between known and hidden confounders
We calculated the correlation between known (treatment, RIN, sequencing batch, sex, and ancestry) and hidden (surrogate variable) factors to determine which factors to include in downstream analyses. The jointly inferred factors (seven in total) capture treatment (factor 5, r = 0.91), RIN (factor 1, r = 0.71), and batch (factor 7, r = −0.56), but do not capture sex (best r = −0.18) or ancestry (best r = −0.34). This agrees with the intuition that treatment, RIN, and batch have broad influences on gene expression measurements, while sex and ancestry influence only a small set of relevant genes (Supplementary Fig. 15a). To reduce the correlation between factor 5 and treatment, we ran supervised sva, protecting the treatment effect. Even with this protection, factor 5 remained correlated with treatment (r = −0.62), likely due to the strong and broad effect exerted by the metabolic perturbation (Supplementary Fig. 15b). The glucose surrogate variables captured RIN (factor 1, r = 0.81) and batch (factor 2, r = 0.78), and the galactose surrogate variables captured RIN (factor 1, r = −0.61) and batch (factors 1 and 4, r = −0.54 and −0.5, respectively). None of the glucose or galactose surrogate variables captured sex or ancestry (Supplementary Fig. 15c, d). We performed the same comparison for surrogate variables derived from the splicing quantification. Without protecting the treatment, surrogate variable 3 from the joint set correlated with treatment (Supplementary Fig. 16a); even after protecting the treatment, surrogate variable 1 from the joint set correlated with treatment (Supplementary Fig. 16b), similar to what was observed for the expression surrogate variables. Surrogate variables from the glucose and galactose conditions correlated strongly with RIN (r = −0.74 and −0.76, respectively; Supplementary Fig. 16c, d).
External datasets
RNA-seq and eQTL datasets
We used GTEx V718 as a reference dataset for the RPE-selective gene and RPE-specific eQTL analyses. The GTEx V7 dataset comprises 53 tissues across 714 donors. All tissues and donors were used in the RPE-selective gene analysis. Of the 53 tissues, 48 had sufficient sample sizes for eQTL analysis and were used for RPE-specific eQTL calling.
GWAS datasets
We used two well-powered ocular disorder GWAS datasets to perform colocalization analyses. The AMD study42 is a meta-analysis across 26 studies and identified 52 independent GWAS signals, including 16 novel loci. The myopia GWAS was part of a 42-trait GWAS collection aimed at finding shared genetic influences across different traits43.
Ocular disease genes dataset
We used ocular disease genes from the Genetic Eye Disease (GEDi) test panel41, which encompasses 257 genes in total including known inherited retinal disorder genes (IRD, n = 214), glaucoma and optic atrophy genes (n = 8), candidate IRD genes (n = 24), age-related macular degeneration risk genes (n = 9), and a non-syndromic hearing loss gene (n = 1).
RPE-selective gene and pathway enrichment analyses
Expression z-score method
To identify RPE-selective genes (genes with high expression in RPE relative to other tissues), we inferred expression specificity using the following procedure:
1. Calculate the median expression level (x) of each gene across all individuals for each tissue.
2. Calculate the mean (μ) and standard deviation (σ) of the median expression values across tissues.
3. Derive a z-score for each tissue: z = (x − μ)/σ.
4. Define a gene as tissue-selective if its z-score is greater than 4.
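The z-score procedure above can be sketched as follows (a toy illustration; the use of the sample rather than population standard deviation is an assumption here):

```python
import statistics

def selectivity_zscores(median_expr):
    # median_expr: tissue -> median expression of one gene across donors
    vals = list(median_expr.values())
    mu = statistics.mean(vals)
    sigma = statistics.stdev(vals)  # sample SD across tissues (an assumption)
    return {t: (x - mu) / sigma for t, x in median_expr.items()}
```

A gene expressed highly in one tissue and lowly everywhere else yields a large positive z-score for that tissue and matching negative scores for the rest.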
We filtered out genes on the sex and mitochondrial chromosomes, and further removed genes in the HLA region due to low mappability. To determine whether technical confounders (such as batch effects) affected the RPE z-scores, we used a QQ-plot to visualize the z-scores of each tissue against the average z-scores across tissues. To calculate the average z-scores, we ranked genes within each tissue and took the average z-score of genes with the same rank across tissues; these averages represent the expected distribution. If the z-score distribution of a tissue differed markedly from the expected distribution, it would separate from the diagonal on the QQ plot. Supplementary Fig. 17 shows that the RPE z-scores fall within the range of z-scores from GTEx tissues; the only outlier is testis, a known outlier from previous studies.
Pathway enrichment of RPE-selective genes
To identify coordinated actions by RPE-selective transcriptomic elements, we performed GSEA32 using z-scores as input against GO gene sets from the Molecular Signature Database89 with 10,000 permutations and otherwise default parameters. The full results are in Supplementary Data 4.
Differential expression and pathway enrichment analyses
Differential expression analysis
We performed differential expression analysis with DESeq234 to detect genes whose expression levels were affected by the metabolic perturbation. Because the hidden factors (SVs) remained moderately correlated with treatment (r = −0.62) even after protecting for treatment (Supplementary Fig. 15), and including SVs correlated with treatment would bias the estimate of the treatment coefficient, we used only known factors in the DESeq2 model:
$${E({\mathrm{expression}}) = \beta _0 + \beta _t \cdot {\mathrm{treatment}} + \beta _s \cdot {\mathrm{sex}} + \mathop {\sum }\limits_{i = 1}^3 \beta _{a,i} \cdot {\mathrm{PC}}_i + \beta _r \cdot {\mathrm{RIN}} + \beta _b \cdot {\mathrm{batch}}}$$
Pathway enrichment analysis
We performed pathway enrichment analysis with GSEA using a ranked gene list with 10,000 permutations but otherwise default parameters. The ranking metric was calculated by multiplying the −log10(FDR) by the sign of the effect size from DESeq2. For the pathway database, we used a subset of the Molecular Signatures Database composed of its Hallmark, Biocarta, Reactome, KEGG and GO gene sets89.
fRPE-selective genes in ocular diseases
RPE-selective expression in ocular disease genes
We stratified all protein-coding genes into two groups: (1) ocular disease genes (n = 257) and (2) non-ocular disease genes (n = 18,477). To determine whether known ocular disease genes have elevated expression in fRPE, we compared the expression specificity z-score distribution (defined previously) across these two groups with a two-sided t-test. We performed the same analysis for all GTEx tissues as a benchmark. As a control, we repeated this analysis using known epilepsy genes (n = 189) curated from the Invitae epilepsy gene test panel (https://www.invitae.com/en/physician/tests/03401/).
RPE-selective expression in ocular disease GWA studies
GWAS risk loci are frequently enriched around causal genes, which have elevated expression in disease-relevant tissues90. To determine whether variants around RPE-selective genes explain more disease heritability than expected by chance, we performed stratified LD score regression on tissue-selective genes using a previously established pipeline44. Since LD score regression operates on the variant level, we assigned variants within 1 kb of any exon of a tissue-selective gene to that tissue. Although many variants show long-range interactions, we restricted our analysis to this conservative window size to capture only nearby cis-effects. We performed LD score regression on the top 200, 500, and 1000 tissue-selective genes (Fig. 3 and Supplementary Fig. 3).
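The variant-to-gene window assignment can be sketched as a simple interval test (the function name and input layout are illustrative, not from the pipeline):

```python
def assign_variants_to_gene(variant_positions, exons, window=1000):
    # keep variants within `window` bp of any exon of the gene;
    # exons: list of (start, end) coordinate pairs on one chromosome
    return [pos for pos in variant_positions
            if any(start - window <= pos <= end + window for start, end in exons)]
```

With a single exon spanning 1000–2000 bp and a 1 kb window, variants at 500 and 1500 are assigned while a variant at 5000 is not.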
eQTL mapping and quality control
Covariate selection
We determined biological sex, genomic ancestry, and hidden confounders as described in previous sections. We performed covariate selection by empirically maximizing the power to detect eQTL. We randomly selected 50 genes from chromosome 22 to perform covariate selection for computational feasibility and to avoid overfitting. We added sex, genotype principal components (maximum of three), and surrogate variables sequentially. We chose not to include batch effect or RIN because they were well represented by surrogate variables. We tested the top three genotype principal components because they explained most of the variability in the genotyping data (Supplementary Fig. 13). After multiple hypothesis correction, the number of eAssociations (defined as a SNP-gene pair that passed hierarchical multiple hypothesis testing by TreeQTL46) increased monotonically for both glucose and galactose conditions as the number of covariates increased (Supplementary Fig. 18), which agrees with our intuition that sva only returns significant and independent surrogate variables. Therefore, we decided to use sex, top three genotype principal components and all surrogate variables (four and five for glucose and galactose conditions, respectively).
Per-treatment eQTL calling
We mapped eQTL using RASQUAL45, which integrates total read counts with allele-specific expression (ASE) to boost power for eQTL mapping. To obtain GC-corrected library sizes, we first calculated GC content from GENCODE v1984 by taking the average GC content of all exons of a given gene; GC-corrected library sizes were then calculated based on the read count output from HTSeq v0.6.085. We used sex, ancestry principal components, and all surrogate variables as covariates. Mathematically, the model is the following:
$$E\left( {{\mathrm{expression}}} \right) = \beta _0 + \beta _{\mathrm{g}} \cdot {\mathrm{genotype}} + \beta _{\mathrm{s}} \cdot {\mathrm{sex}} + \mathop {\sum }\limits_{i = 1}^3 \beta _{a,i} \cdot {\mathrm{PC}}_i + \mathop {\sum }\limits_{i = 1}^n \beta _{v,i} \cdot {\mathrm{SV}}_i$$
where PC stands for genotype principal components, SV stands for surrogate variables, and n = 4 and 5 for the glucose and galactose conditions, respectively. We obtained gene-level and association-level FDR using the hierarchical hypothesis correction procedure implemented in TreeQTL46, which performs FDR correction first at the gene level and then at the association level (gene by SNP). We used FDR < 0.05 at both levels.
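The two-level error control can be sketched as follows (a simplified illustration in the spirit of TreeQTL, whose actual procedure differs in detail; here gene-level p-values are formed by Bonferroni over SNPs, followed by Benjamini-Hochberg at each level):

```python
def bh_reject(pvals, alpha=0.05):
    # Benjamini-Hochberg step-up: return indices of rejected hypotheses
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

def hierarchical_eqtl(gene_to_snp_pvals, alpha=0.05):
    genes = list(gene_to_snp_pvals)
    # level 1: gene-level p-values (Bonferroni over SNPs), then BH across genes
    gene_p = [min(1.0, min(ps) * len(ps)) for ps in gene_to_snp_pvals.values()]
    kept = [genes[i] for i in bh_reject(gene_p, alpha)]
    # level 2: BH across SNP associations within each surviving gene
    return {g: bh_reject(gene_to_snp_pvals[g], alpha) for g in kept}
```

Only genes surviving the first level have their individual SNP associations tested, which controls error rates at both resolutions.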
eQTL quality control
We determined whether the p-values were inflated (e.g., due to model mis-specification) by visualizing their distribution. The distribution suggests that the p-values are, if anything, slightly conservative. The spike near zero and the upward trend in the QQ plot show clear enrichment for significant eQTL (Supplementary Fig. 19a, b). As expected, eQTLs with low p-values were enriched around transcription start sites (Supplementary Fig. 19c, d).
Differential eQTL calling with TreeQTL
We performed multi-tissue eQTL calling using the RASQUAL p-values and the multi-tissue version of TreeQTL46. We set the gene as the first level, the treatment as the second level, and the gene-treatment-SNP as the third level, and used the default FDR < 0.05 cutoff for all three levels. A comparison of −log10(p-values) across the two metabolic conditions is shown in Supplementary Fig. 4, in which the top five treatment-specific and shared eQTL are labeled. We ranked the differential eQTL results in order of decreasing δ|π − 0.5| (the difference in allelic imbalance between the two conditions). More specifically, π denotes the allelic ratio (the fraction of reads carrying the alternative allele), so |π − 0.5| denotes the allelic imbalance, and the difference in allelic imbalance, δ|π − 0.5|, quantifies the change in eQTL effect size across the two conditions. The allelic ratios are shown in Supplementary Fig. 5.
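The ranking metric can be sketched directly (helper names are illustrative):

```python
def delta_allelic_imbalance(pi_glucose, pi_galactose):
    # |pi - 0.5| is the allelic imbalance (pi = 0.5 means balanced expression);
    # the delta compares the imbalance between the two conditions
    return abs(abs(pi_glucose - 0.5) - abs(pi_galactose - 0.5))

def rank_differential_eqtls(eqtls):
    # eqtls: list of (name, pi_glucose, pi_galactose); largest change first
    return sorted(eqtls, key=lambda e: delta_allelic_imbalance(e[1], e[2]),
                  reverse=True)
```

An eQTL strongly imbalanced in one condition but balanced in the other ranks above one that is equally imbalanced in both.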
RPE-selective eQTL
We compared fRPE and GTEx eGenes with a two-step FDR approach as described previously56. In brief, eGenes shared across both conditions in fRPE were selected (FDR < 0.05); we filtered for shared eGenes because they likely reflect regulatory effects that are not treatment dependent. For each eGene, we screened all GTEx tissues for association at a relaxed FDR < 0.1 and defined an eGene as RPE-selective if no significant association was found in any GTEx tissue. Note that this strategy is conservative on two levels: first, by selecting shared fRPE eQTL, these eQTL must pass FDR < 0.05 in both treatments; second, GTEx FDR corrections were performed tissue-by-tissue, and per-tissue FDR is anti-conservative. We also compared fRPE eGenes to retinal eGenes. The EyeGEx dataset used 406 samples to map eQTLs in the retina and found a total of 10,463 eGenes. We again selected eGenes found in both the glucose and galactose conditions (n = 687) and grouped them into fRPE-specific and EyeGEx-shared depending on whether they were also eGenes in the EyeGEx dataset.
Motif enrichment in treatment-specific eQTLs
In order to find motifs enriched around treatment-specific eQTLs, we first selected the lead eQTLs from either condition and extracted the 15 bp flanking the lead SNP as the target sequences. To obtain matched background sequence, we flipped the eQTL SNP to its alternative allele as the background. To keep the direction of effect consistent, we always used the expression-increasing allele as the target and the expression-decreasing allele as the background. The target and background sequences were used as input to HOMER to identify enriched motifs.
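Construction of the matched target and background sequences can be sketched as follows (toy flanks shown; the actual analysis used 15-bp flanks drawn from the reference genome):

```python
def motif_sequences(flank5, flank3, increasing_allele, decreasing_allele):
    # target carries the expression-increasing allele, background the
    # expression-decreasing allele, so the flanking sequence is identical
    target = flank5 + increasing_allele + flank3
    background = flank5 + decreasing_allele + flank3
    return target, background
```

Because the two sequences differ only at the SNP position, any motif enrichment detected by HOMER is attributable to the allele itself rather than the flanking context.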
sQTL calling and quality control
We performed covariate selection by empirically maximizing the power to detect sQTL. We used intron clusters only from chromosome 1 to avoid overfitting, and tested only the top three genotype principal components because they explained most of the variability in the genotyping data (Supplementary Fig. 13). FastQTL was run in permutation mode (adaptively permuting 100–10,000 times) to obtain intron-level sQTL p-values. After multiple hypothesis correction, the number of significant sQTL clusters decreased as the number of covariates increased (Supplementary Fig. 20). This is likely because LeafCutter uses the ratio between each intron and its intron cluster as the phenotype: if a batch effect influences the expression of a gene, it shifts the quantification of every intron in that gene in the same direction, so taking the ratio between an intron and its intron cluster effectively cancels out the batch effect.
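The cancellation argument can be illustrated with a toy example:

```python
def intron_ratios(intron_counts):
    # LeafCutter-style phenotype: each intron's share of its cluster total
    total = sum(intron_counts)
    return [c / total for c in intron_counts]
```

Doubling every intron count in a cluster, as a multiplicative batch effect on the whole gene would, leaves the ratios unchanged, which is why the surrogate variables add little power for sQTL mapping.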
Per-treatment sQTL calling
We mapped sQTLs separately for two conditions using FastQTL60 in both nominal and permutation modes and used a simple linear regression:
$$E({\mathrm{intron}}) = \beta _0 + \beta _{\mathrm{g}} \cdot {\mathrm{genotype}}$$
where intron stands for the ratio between reads overlapping each intron and the total number of reads overlapping the intron cluster, and genotype stands for genotype dosage. To obtain cluster-level p-values, we used a conservative approach, correcting for the family-wise error rate with the Bonferroni procedure across introns within each cluster. Global FDR estimates were calculated using the lowest Bonferroni-adjusted p-value per cluster. We used FDR < 0.05 as the significance cutoff.
sQTL quality control
As a quality control, we determined whether the p-values were inflated by visualizing their distribution. The p-values showed a uniform distribution with a spike near 0 (Supplementary Fig. 21a). The upward trend in the QQ plot shows clear enrichment for significant sQTL (Supplementary Fig. 21b). Further, sQTLs with low p-values were enriched around splice donor and acceptor sites (Supplementary Fig. 21c), and intronic sQTL SNPs were enriched at intron boundaries (Supplementary Fig. 21d).
Fine-mapping of polygenic ocular disease risk loci
We used fRPE eQTL and sQTL information to identify potential causal genes in two well-powered GWAS on age-related macular degeneration and myopia using a modified version of eCAVIAR65. For every significant eQTL, we tested all variants within 500-kb of the lead eQTL SNP for colocalization with GWAS summary statistics. At each candidate locus, we ran FINEMAP91 twice to compute the posterior probability that each individual SNP at the locus was a causal SNP for the GWAS phenotype and fRPE e/sQTLs. We then processed the FINEMAP results to compute a colocalization posterior probability (CLPP) using the method described by eCAVIAR65. We defined any locus with CLPP > 0.01 to have sufficient evidence for colocalization. At loci that showed colocalization between RPE eQTLs and GWAS associations, we performed the colocalization tests again using eQTLs from each of 44 GTEx tissues. To determine whether any potential causal genes act primarily through fRPE, we repeated colocalization analysis with GTEx eQTLs (Supplementary Figs. 22 and 23). To identify condition-specific colocalization, we ran eCAVIAR separately for the glucose and galactose conditions. A condition-specific colocalization is defined as having a CLPP > 0.01 in one condition, and at least an order of magnitude lower in the other condition with a CLPP < 0.01.
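The colocalization score can be sketched as follows (eCAVIAR defines a per-SNP CLPP as the product of the GWAS and eQTL causal posteriors; aggregating per-SNP values by summation over the locus is a simplification assumed here, not necessarily the exact procedure of the modified pipeline):

```python
def clpp(gwas_posteriors, eqtl_posteriors):
    # per-SNP colocalization posterior = product of the two causal posteriors
    # (e.g., FINEMAP outputs); sum over SNPs as a locus-level score
    return sum(g * e for g, e in zip(gwas_posteriors, eqtl_posteriors))
```

A locus where the same SNP carries most of the posterior mass in both the GWAS and the eQTL fine-mapping yields a high score, whereas posteriors concentrated on different SNPs yield a score near zero.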
Estimation of isoform proportions
To estimate the proportions of the normal and mis-spliced isoform (exon-3 skipped isoform), we solved a system of equations based on the following observations:
1. The "C" haplotype produces approximately 3 times as much transcript as the "a" haplotype.
2. The mis-spliced isoform accounts for approximately 1% of expression in individuals with the CC genotype.
3. The mis-spliced isoform accounts for approximately 4% of expression in individuals with the Ca genotype.
We use n_c and n_a to denote the proportions of the normal isoform for the "C" and "a" haplotypes, and p_n and p_m to denote the proportions of the normal and mis-spliced isoforms that escape nonsense-mediated decay (NMD; i.e., are not degraded). For simplicity, we assume that p_n = 1, because the normal isoform should not be degraded by NMD. We use c_t, c_n, and c_m to denote the total, normal, and mis-spliced isoforms for the "C" haplotype, and a_t, a_n, and a_m to denote the total, normal, and mis-spliced isoforms for the "a" haplotype. We know that:
$$c_{\mathrm{n}} = 100c_{\mathrm{m}}$$
$$c_{\mathrm{n}} + a_{\mathrm{n}} = 25\left( {c_{\mathrm{m}} + a_{\mathrm{m}}} \right)$$
$$c_{\mathrm{t}} = 3a_{\mathrm{t}}$$
Plugging in c_n = n_c, c_m = (1 − n_c)p_m, a_n = n_a and a_m = (1 − n_a)p_m:
$$n_{\mathrm{c}} = 100\left( {1 - n_{\mathrm{c}}} \right)p_{\mathrm{m}}$$
$$n_{\mathrm{c}} + n_{\mathrm{a}} = 25\left( {\left( {1 - n_{\mathrm{c}}} \right)p_{\mathrm{m}} + \left( {1 - n_{\mathrm{a}}} \right)p_{\mathrm{m}}} \right)$$
$$n_{\mathrm{c}} + \left( {1 - n_{\mathrm{c}}} \right)p_{\mathrm{m}} = 3\left( {n_{\mathrm{a}} + \left( {1 - n_{\mathrm{a}}} \right)p_{\mathrm{m}}} \right)$$
Solving the system of equations leads to n_c = 0.82, n_a = 0.25, and p_m = 0.05. In other words, 82% and 25% of isoforms transcribed from the "C" and "a" haplotypes, respectively, are normal. We estimate that NMD degrades 95% of mis-spliced isoforms.
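As a check, the system can be solved numerically. In a minimal sketch, p_m is eliminated using the first equation and n_a using the third, leaving a one-dimensional root-finding problem in n_c (the bracket below excludes the trivial all-zero solution); this recovers values consistent with the rounded figures quoted above:

```python
def solve_isoform_model(tol=1e-10):
    def residual(n_c):
        # from eq. 1: (1 - n_c) * p_m = n_c / 100
        p_m = n_c / (100.0 * (1.0 - n_c))
        # from eq. 3, using eq. 1: 1.01 * n_c = 3 * (n_a + (1 - n_a) * p_m)
        n_a = (1.01 * n_c / 3.0 - p_m) / (1.0 - p_m)
        # remaining constraint (eq. 2): ~4% mis-spliced in Ca heterozygotes
        return 25.0 * p_m * (2.0 - n_c - n_a) - (n_c + n_a)
    lo, hi = 0.5, 0.95  # bracket chosen to exclude the trivial solution at 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    n_c = 0.5 * (lo + hi)
    p_m = n_c / (100.0 * (1.0 - n_c))
    n_a = (1.01 * n_c / 3.0 - p_m) / (1.0 - p_m)
    return n_c, n_a, p_m
```

The exact root lands near (0.82, 0.24, 0.045); small differences from the quoted values reflect rounding in the stated input ratios.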
ARPE-19 cells were obtained from ATCC (CRL-2302). The cells were obtained directly from ATCC within the past year. They exhibit the expected cobblestone morphology and slight pigmentation when differentiated by standard protocols. ARPE-19 cells were fixed, stained with DAPI and imaged by fluorescence microscopy. No evidence of mycoplasma contamination was seen. ARPE-19 cells were differentiated for 3 months in 6-well plates (Corning) in medium containing 3 mM pyruvate92 and treated with 100 µg/mL cycloheximide (CHX; Sigma) or vehicle (DMSO) for 3 h. Cells were then collected, RNA was extracted with TRIzol (Invitrogen), and cDNA was synthesized with an iScript™ cDNA Synthesis Kit (Bio-Rad). Oligonucleotide primers were designed to specifically amplify the normal or mis-spliced isoforms of the RDH5 transcript. For the normal isoform, the forward primer (ggggctactgtgtctccaaa) was located in exon 3 and the reverse primer (tgcagggttttctccagact) was located in exon 4, with an expected product size of 151 bp. The amplification conditions were: 94 °C for 2 min followed by 38 cycles of 94 °C for 30 s, 60 °C for 30 s, and 72 °C for 15 s. For the mis-spliced isoform, the forward primer (gatgcacgttaaggaagcag/gcg) spanned the exon 2/4 junction, with the three bases at the 3′ end located in exon 4. The reverse primer (gcgctgttgcattttcaggt) was located in exon 5. The expected product size is 204 bp. The amplification conditions were: 94 °C for 2 min followed by 50 cycles of 94 °C for 30 s, 60 °C for 30 s, and 72 °C for 15 s. AmpliTaq (ThermoFisher) and 2.5 mM MgCl2 were used for all reactions. The identities of the normal and mis-spliced PCR products were confirmed by Sanger sequencing. For quantification, PCR products were resolved on 2% agarose gels containing ethidium bromide and imaged using a Bio-Rad ChemiDoc Touch Imaging System.
Equal-sized boxes were drawn around bands for the CHX and DMSO samples, grayscale values were measured with ImageJ (NIH), and the relative fold change was calculated (mean ± SEM; three independent experiments). A one-sided Student's t-test was used to assess the statistical significance of a model under which CHX increased product abundance.
Statistics and reproducibility
To promote the reproducibility of our study, we deposited raw experimental data to GEO (see Data availability) and open sourced all scripts for data processing and analysis (see Code availability).
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
All relevant data are available in the Supplementary Data files (Supplementary Data 1–12). Full eQTL and sQTL summary statistics have been deposited into Box: https://stanford.box.com/s/asrxy0o66xxe1j7mfj56p3z3d405gijj and are available at http://montgomerylab.stanford.edu/resources.html. RNA-seq data can be downloaded via GEO accession number GSE129479. Source data underlying the figures are available as Supplementary Data 13–30.
Code availability
Code to reproduce all analyses in this manuscript has been deposited on GitHub: https://github.com/boxiangliu/rpe.
McKusick, V. A. Mendelian inheritance in man and its online version, OMIM. Am. J. Hum. Genet. 80, 588–604 (2007).
Boon, C. J. F. et al. The spectrum of retinal dystrophies caused by mutations in the peripherin/RDS gene. Prog. Retin. Eye Res. 27, 213–235 (2008).
Nash, B. M., Wright, D. C., Grigg, J. R., Bennetts, B. & Jamieson, R. V. Retinal dystrophies, genomic applications in diagnosis and prospects for therapy. Transl. Pediatr. 4, 139–163 (2015).
Paunescu, K., Preising, M. N., Janke, B., Wissinger, B. & Lorenz, B. Genotype–phenotype correlation in a German family with a novel complex CRX mutation extending the open reading frame. Ophthalmology 114, 1348–1357.e1341 (2007).
Sundin, O. H. et al. Extreme hyperopia is the result of null mutations in MFRP, which encodes a Frizzled-related protein. Proc. Natl Acad. Sci. 102, 9553–9558 (2005).
Vaclavik, V., Gaillard, M. C., Tiab, L., Schorderet, D. F. & Munier, F. L. Variable phenotypic expressivity in a Swiss family with autosomal dominant retinitis pigmentosa due to a T494M mutation in the PRPF3 gene. Mol. Vis. 16, 467–475 (2010).
Sergouniotis, P. I. et al. Phenotypic variability in RDH5 retinopathy (Fundus Albipunctatus). Ophthalmology 118, 1661–1670 (2011).
Llavona, P. et al. Allelic expression imbalance in the human retinal transcriptome and potential impact on inherited retinal diseases. Genes 8, 283 (2017).
MacArthur, J. et al. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog). Nucleic Acids Res. 45, D896–D901 (2017).
Bressler, N. M. Age-related macular degeneration is the leading cause of blindness. JAMA 291, 1900–1901 (2004).
Swaroop, A., Chew, E. Y., Bowes Rickman, C. & Abecasis, G. R. Unraveling a multifactorial late-onset disease: from genetic susceptibility to disease mechanisms for age-related macular degeneration. Annu. Rev. Genom. Hum. Genet. 10, 19–43 (2009).
Zhang, Y. & Wildsoet, C. F. RPE and choroid mechanisms underlying ocular growth and myopia. Prog. Mol. Biol. Transl. Sci. 134, 221–240 (2015).
Tedja, M. S. et al. Genome-wide association meta-analysis highlights light-induced signaling as a driver for refractive error. Nat. Genet. 50, 834–848 (2018).
Holden, B. A. et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology 123, 1036–1042 (2016).
Nicolae, D. L. et al. Trait-associated SNPs are more likely to be eQTLs: annotation to enhance discovery from GWAS. PLoS Genet. 6, e1000888 (2010).
Gusev, A. et al. Partitioning heritability of regulatory and cell-type-specific variants across 11 common diseases. Am. J. Hum. Genet. 95, 535–552 (2014).
Nica, A. C. & Dermitzakis, E. T. Expression quantitative trait loci: present and future. Philos. Trans. R. Soc. B 368, 20120362–20120362 (2013).
GTEx Consortium. Genetic effects on gene expression across human tissues. Nature 550, 204–213 (2017).
Raymond, S. M. & Jackson, I. J. The retinal pigmented epithelium is required for development and maintenance of the mouse neural retina. Curr. Biol. 5, 1286–1295 (1995).
Strauss, O. The retinal pigment epithelium in visual function. Physiol. Rev. 85, 845–881 (2005).
Vollrath, D. et al. Tyro3 modulates Mertk-associated retinal degeneration. PLoS Genet. 11, e1005723 (2015).
Hu, J. & Bok, D. Culture of highly differentiated human retinal pigment epithelium for analysis of the polarized uptake, processing, and secretion of retinoids. Methods Mol. Biol. 652, 55–73 (2010).
Maminishkis, A. et al. Confluent monolayers of cultured human fetal retinal pigment epithelium exhibit morphology and physiology of native tissue. Invest. Ophthalmol. Vis. Sci. 47, 3612–3624 (2006).
Browning, B. L. & Browning, S. R. Genotype imputation with millions of reference samples. Am. J. Hum. Genet. 98, 116–126 (2016).
1000 Genomes Project Consortium et al. A global reference for human genetic variation. Nature 526, 68–74 (2015).
Folmes, C. D. L., Dzeja, P. P., Nelson, T. J. & Terzic, A. Metabolic plasticity in stem cell homeostasis and differentiation. Cell Stem Cell 11, 596–606 (2012).
Terluk, M. R. et al. Investigating mitochondria as a target for treating age-related macular degeneration. J. Neurosci. 35, 7304–7311 (2015).
Gohil, V. M. et al. Nutrient-sensitized screening for drugs that shift energy metabolism from mitochondrial respiration to glycolysis. Nat. Biotechnol. 28, 249–255 (2010).
Bennis, A. et al. Comparison of mouse and human retinal pigment epithelium gene expression profiles: potential implications for age-related macular degeneration. PLoS ONE 10, e0141597 (2015).
Liao, J.-L. et al. Molecular signature of primary retinal pigment epithelium and stem-cell-derived RPE cells. Hum. Mol. Genet. 19, 4229–4238 (2010).
Strunnikova, N. V. et al. Transcriptome analysis and molecular signature of human retinal pigment epithelium. Hum. Mol. Genet. 19, 2468–2486 (2010).
Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl Acad. Sci. 102, 15545–15550 (2005).
Ashburner, M. et al. Gene ontology: tool for the unification of biology. Nat. Genet. 25, 25–29 (2000).
Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 15, 550 (2014).
Paton, C. M. & Ntambi, J. M. Biochemical and physiological function of stearoyl-CoA desaturase. Am. J. Physiol. 297, E28–E37 (2009).
Samuel, W. et al. Regulation of stearoyl coenzyme A desaturase expression in human retinal pigment epithelial cells by retinoic acid. J. Biol. Chem. 276, 28744–28750 (2001).
Yang, T. et al. Crucial step in cholesterol homeostasis: sterols promote binding of SCAP to INSIG-1, a membrane protein that facilitates retention of SREBPs in ER. Cell 110, 489–500 (2002).
Aledo, R. et al. Genetic basis of mitochondrial HMG-CoA synthase deficiency. Hum. Genet. 109, 19–23 (2001).
Reyes-Reveles, J. et al. Phagocytosis-dependent ketogenesis in retinal pigment epithelium. J. Biol. Chem. 292, 8038–8047 (2017).
Slowikowski, K., Hu, X. & Raychaudhuri, S. SNPsea: an algorithm to identify cell types, tissues and pathways affected by risk loci. Bioinformatics 30, 2496–2497 (2014).
Consugar, M. B. et al. Panel-based genetic diagnostic testing for inherited eye diseases is highly accurate and reproducible, and more sensitive for variant detection, than exome sequencing. Genet. Med. 17, 253–261 (2015).
Variational principles for spectral analysis of one Sturm-Liouville problem with transmission conditions
Kadriye Aydemir & Oktay Sh. Mukhtarov
Advances in Difference Equations, volume 2016, Article number: 76 (2016)
We study certain spectral aspects of the Sturm-Liouville problem with a finite number of interior singularities. First, for a self-adjoint realization of the considered problem, we introduce a new inner product in the direct sum of the \(L_{2}\) spaces of functions defined on each of the separate intervals. Then we define some special solutions and construct the Green function in terms of them. Based on the Green function, we establish an eigenfunction expansion theorem. By applying the obtained results we extend and generalize such important spectral properties as the Parseval and Carleman equalities, the Rayleigh quotient, and the Rayleigh-Ritz formula (minimization principle) for the considered problem.
Introduction
The Sturm-Liouville differential equations are a class of differential equations often encountered in solving PDEs using the method of separation of variables. Their solutions define many well-known special functions, such as Bessel functions, Legendre polynomials, Chebyshev polynomials, or various hypergeometric functions arising in engineering and science applications. The solution of many problems in mathematical physics involves the investigation of a spectral problem, that is, the investigation of the spectrum and the expansion of an arbitrary function in terms of eigenfunctions of a differential operator. The issue of expansion in eigenfunctions is a classical one going back at least to Fourier (see, e.g., [1–4]). The method of Sturm expansions is widely used in calculations of the spectroscopic characteristics of atoms and molecules [5–7]. A relatively recent impact is due to the study of wave propagation in random media [8, 9], where eigenfunction expansions are an important input in the proof of localization. The use of this tool is settled by classical results in the Schrödinger operator case. But with the study of operators related to classical waves [8, 10], a need for more general results on eigenfunction expansion became apparent. An important point is that a general function can be expanded in terms of all the eigenfunctions of an operator, a so-called complete set of functions. That is, if \(f_{n}\) is an eigenfunction of an operator Ψ with eigenvalue \(\mu_{n}\) (so \(\Psi f_{n}=\mu_{n} f_{n}\)), then a general function g can be expressed as the linear combination \(g=c_{1}f_{1}+c_{2}f_{2}+\cdots\), where the \(c_{n}\) are coefficients, and the sum is over a complete set of functions. The advantage of expressing a general function as a linear combination of a set of eigenfunctions is that it allows us to deduce the effect of an operator on a function that is not one of its own eigenfunctions.
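The expansion \(g=c_{1}f_{1}+c_{2}f_{2}+\cdots\) can be illustrated numerically for the simplest Sturm-Liouville problem \(-y''=\lambda y\), \(y(0)=y(\pi)=0\), whose complete orthonormal system is \(\varphi_{n}(x)=\sqrt{2/\pi}\sin(nx)\). The following Python sketch (the test function \(g(x)=x(\pi-x)\) and the truncation level are arbitrary illustrative choices, not taken from the paper) computes the coefficients by quadrature and checks that the partial sums converge uniformly:

```python
import numpy as np

# Expansion in the eigenfunctions of the simplest Sturm-Liouville problem
# -y'' = lambda*y, y(0) = y(pi) = 0, with orthonormal system
# phi_n(x) = sqrt(2/pi)*sin(n*x) and eigenvalues n^2.
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]

def inner(u, v):
    # trapezoidal rule for the L2(0, pi) inner product
    w = u * v
    return float((0.5 * (w[0] + w[-1]) + np.sum(w[1:-1])) * dx)

phi = {n: np.sqrt(2.0 / np.pi) * np.sin(n * x) for n in range(1, 51)}
g = x * (np.pi - x)                        # function to expand

c = {n: inner(g, phi[n]) for n in phi}     # Fourier coefficients c_n
g_N = sum(c[n] * phi[n] for n in phi)      # 50-term partial sum

err = float(np.max(np.abs(g - g_N)))
print(err)   # uniform error of the truncated eigenfunction expansion
```

For this particular g the coefficients are known in closed form, \(c_{n}=\sqrt{2/\pi}\cdot 4/n^{3}\) for odd n and \(c_{n}=0\) for even n, which the quadrature reproduces.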
The importance of Sturm-Liouville problems for spectral methods lies in the fact that the spectral approximation of the solution of a differential equation is usually regarded as a finite expansion of eigenfunctions of a suitable Sturm-Liouville problem. Eigenfunction expansion problems for classical Sturm-Liouville problems have been investigated by many authors (see [1, 2, 11, 12] and references therein). In this paper we investigate certain spectral problems arising in the theory of the convergence of the eigenfunction expansion for one nonclassical eigenvalue problem, which consists of the Sturm-Liouville equation
$$\begin{aligned} \mathcal{L}(y):=-a(x)y^{\prime\prime}(x)+ q(x)y(x)=\lambda y(x) \end{aligned}$$
on a finite number of disjoint intervals \(\Omega=\bigcup_{i=1}^{n+1}(\xi_{i-1}, \xi_{i})\), where \(0=\xi_{0}<\xi_{1}<\cdots<\xi_{n+1}=\pi\), together with boundary conditions (BCs) at the endpoints \(x=0, \pi\)
$$\begin{aligned} &\mathcal{L}_{\alpha}(y):=\alpha_{1} y(0)+ \alpha_{2} y'(0)=0, \end{aligned}$$
$$\begin{aligned} &\mathcal{L}_{\beta}(y):=\beta_{1} y(\pi)+ \beta_{2} y'(\pi)=0 \end{aligned}$$
and transmission conditions at the interior points \(\xi_{k} \in (0,\pi)\), \(k=1,2,\ldots,n\),
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}_{2k-1}(y)={}&\delta'_{2k-1}y'( \xi_{k}+0)+\delta_{2k-1}y(\xi _{k}+0)+ \gamma'_{2k-1}y'(\xi_{k}-0) \\ &{}+\gamma_{2k-1}y(\xi_{k}-0)=0, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \mathcal{L}_{2k}(y)=\delta'_{2k}y'( \xi_{k}+0)+\delta_{2k}y(\xi_{k}+0)+\gamma '_{2k}y'(\xi_{k}-0)+ \gamma_{2k}y(\xi_{k}-0)=0, \end{aligned}$$
where \(a(x)=a_{i}^{2}>0\) for \(x \in\Omega_{i}:= (\xi_{i-1}, \xi_{i})\), \(i=1,2,\ldots,n+1 \), the potential \(q(x)\) is a real-valued function that is continuous in each of the intervals \((\xi_{i-1}, \xi_{i})\) and has finite limits \(q( 0+0)\), \(q( \pi-0)\), and \(q(\xi_{i}\mp0)\), \(i=1,2,\ldots,n \), λ is a complex spectral parameter, and \(\delta_{k}\), \(\delta'_{k}\), \(\gamma_{k}\), and \(\gamma'_{k}\) (\(k=1,2,\ldots,2n\)) are real numbers. The conditions are imposed on the left and right limits of solutions and their derivatives at the interior points and are often called 'transmission conditions' or 'interface conditions.' Problems of this type often arise in various physical transfer problems (see [13]). Some problems with transmission conditions arise in thermal conduction problems for a thin laminated plate (i.e., a plate composed of materials with different characteristics piled in the thickness; see [14]). Similar problems with point interactions are also studied in [15, 16], among others. Since the solutions of equation (1) may have discontinuities at the interior points of the interval and since the values of the solutions and their derivatives at the interior points \(\xi_{i}\) are not defined, an important question is how to introduce a new Hilbert space in such a way that the considered problem can be interpreted as a self-adjoint problem in this space. The purpose of this paper is to extend and generalize important spectral properties such as the Rayleigh quotient, eigenfunction expansion, Rayleigh-Ritz formula (minimization principle), Parseval equality, and Carleman equality for Sturm-Liouville problems with interior singularities. The 'Rayleigh quotient' is the basis of an important approximation method that is used in solid mechanics and quantum mechanics. In the latter, it is used in the estimation of energy eigenvalues of quantum systems that are not exactly solvable, for example, many-electron atoms and molecules.
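As a quick numerical illustration of how the Rayleigh quotient bounds the lowest eigenvalue, the following sketch treats the classical constant-coefficient case \(a\equiv1\), \(q\equiv0\), Dirichlet conditions, with no interior singularities (the trial function \(y=x(\pi-x)\) is an arbitrary admissible choice, not taken from the paper):

```python
import numpy as np

# Rayleigh quotient R[y] = int (y')^2 dx / int y^2 dx for the classical
# problem -y'' = lambda*y, y(0) = y(pi) = 0, whose smallest eigenvalue is
# lambda_1 = 1.  Any trial function satisfying the boundary conditions
# gives an upper bound for lambda_1 (minimization principle).
x = np.linspace(0.0, np.pi, 100001)
dx = x[1] - x[0]
y = x * (np.pi - x)          # trial function with y(0) = y(pi) = 0
dy = np.gradient(y, dx)      # numerical derivative

R = np.sum(dy**2) * dx / (np.sum(y**2) * dx)
print(R)   # ~ 10/pi^2 = 1.0132..., slightly above lambda_1 = 1
```

Enlarging the trial space (the Rayleigh-Ritz procedure) drives this bound down toward \(\lambda_{1}\).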
We note that spectral problems for ordinary differential operators without singularities were investigated in many works (see the monographs [4, 12, 17–22] and the references therein). Some aspects of spectral problems for differential equations having singularities with classical boundary conditions at the endpoints were studied, among others, in [15, 16, 23–32], where further references can be found.
Some preliminary results in the corresponding Hilbert space
We denote by \(\theta_{ijk} \) (\(1\leq j< k \leq4\)) the determinant of the jth and kth columns of the matrix
$$T_{i}= \begin{bmatrix} \delta'_{2i-1} & \delta_{2i-1} & \gamma'_{2i-1} & \gamma_{2i-1}\\ \delta'_{2i} & \delta_{2i} & \gamma'_{2i} & \gamma_{2i} \end{bmatrix}, \quad i=1,2,\ldots,n. $$
Note that throughout this study we shall assume that \(\theta_{ijk}>0\) for all i, j, k. In the direct sum space \(\mathcal{H}= \bigoplus_{i=1}^{n+1}L_{2}(\Omega_{i})\) we define the new inner product associated with the considered BVTP (1)-(5) by
$$\begin{aligned} \langle y,z\rangle_{\mathcal{H}}:= \sum _{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}y(x)\overline{z(x)}\,dx \end{aligned}$$
for \(y= y(x)\), \(z= z(x) \in\mathcal{H}\). Here we let \(\theta_{034}=\theta_{(n+1)12}=1\). Let us introduce the linear operator \((\mathcal{A}y)(x)=-a(x)y^{\prime\prime}(x)+ q(x)y(x)\) in the Hilbert space \(\mathcal{H}\) with domain of definition \(D(\mathcal{A})\) consisting of all functions \(y\in\mathcal{H}\) satisfying the following conditions:
y and \(y'\) are absolutely continuous in each interval \(\Omega_{i}\) (\(i=1,2,\ldots,n+1\)) and has finite limits \(y(\xi_{0}+0)\), \(y'(\xi_{0}+0)\), \(y(\xi_{n+1}-0)\), \(y'(\xi _{n+1}-0)\), \(y(\xi_{k}\mp0)\), and \(y'(\xi_{k}\mp0)\) for \(k=1,2,\ldots,n\);
\(\mathcal{L}y(x) \in \mathcal{H}\), \(\mathcal{L}_{\alpha}y(x)=\mathcal{L}_{\beta}y(x)=\mathcal{L}_{2k-1}y(x)=\mathcal{L}_{2k} y(x)=0\), \(k=1,2,\ldots,n\). Then problem (1)-(5) is reduced to the operator equation \(\mathcal{A}y=\lambda y \) in the Hilbert space \(\mathcal{H}\).
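To make the weighted inner product defined above concrete, here is a minimal numerical sketch for a single interior point \(\xi_{1}\) (the case \(n=1\), so \(\theta_{034}=\theta_{212}=1\) and \(\langle y,z\rangle_{\mathcal{H}}=(\theta_{112}/a_{1}^{2})\int_{0}^{\xi_{1}} y\overline{z}\,dx+(\theta_{134}/a_{2}^{2})\int_{\xi_{1}}^{\pi} y\overline{z}\,dx\)). The values of \(a_{1}\), \(a_{2}\), \(\xi_{1}\) and the transmission matrix \(T_{1}\) are made-up illustrative data, not taken from the paper:

```python
import numpy as np

# Hypothetical data for one interior point xi_1 (n = 1).
a1, a2, xi1 = 1.0, 2.0, 1.2
T1 = np.array([[1.0, 0.5, 2.0, 0.0],
               [0.0, 1.0, 0.5, 3.0]])

def theta(j, k):
    # determinant of the j-th and k-th columns of T_1
    return float(np.linalg.det(T1[:, [j - 1, k - 1]]))

th12, th34 = theta(1, 2), theta(3, 4)
assert th12 > 0 and th34 > 0            # standing assumption theta_ijk > 0

def trap(w, x):
    # trapezoidal rule on a (possibly complex) sampled integrand
    return complex(np.sum(0.5 * (w[:-1] + w[1:]) * np.diff(x)))

def inner(y, z):
    # weighted inner product of the direct sum space for n = 1
    xa = np.linspace(0.0, xi1, 2001)
    xb = np.linspace(xi1, np.pi, 2001)
    return (th12 / a1**2) * trap(y(xa) * np.conj(z(xa)), xa) \
         + (th34 / a2**2) * trap(y(xb) * np.conj(z(xb)), xb)

y = lambda t: np.sin(t)
z = lambda t: np.exp(1j * t)
print(inner(y, z))
# Hermitian symmetry and positivity of the new inner product:
assert abs(inner(y, z) - np.conj(inner(z, y))) < 1e-12
assert inner(y, y).real > 0
```

The symmetry computation in Theorem 2.1 below only uses these two properties of the weighted product, together with the Wronskian identities at \(\xi_{k}\).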
Theorem 2.1
For all \(y, z \in D(\mathcal{A})\), we have the equality \(\langle\mathcal{A}y,z\rangle_{\mathcal{H}}=\langle y,\mathcal{A}z\rangle_{\mathcal{H}} \).
From the definition of Hilbert space \(\mathcal{H}\) it follows that
$$\begin{aligned} \langle\mathcal{A}y,z\rangle_{\mathcal{H}} =& \sum _{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}\mathcal{L}y(x)\overline{z(x)}\,dx \\ =& \sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}y(x)\overline{\mathcal{L}z(x)}\,dx \\ &{} + \theta_{112}\theta_{212}\cdots\theta_{n12} \bigl(W(y, \overline{z};\xi_{1}-)- W(y, \overline{z};0)\bigr) \\ &{}+ \theta_{134}\theta_{212}\cdots\theta_{n12}\bigl( W(y, \overline{z};\xi_{2}-) - W(y, \overline{z};\xi_{1}+) \bigr) \\ &{} +\cdots +\theta_{134}\theta_{234}\cdots\theta_{n34} \bigl(W(y,\overline{z};\pi)- W(y,\overline{z};\xi_{n}+)\bigr) \\ =& \langle y,\mathcal{A}z\rangle_{\mathcal{H}} + \theta_{112}\theta_{212}\cdots \theta_{n12}\bigl(W(y, \overline{z};\xi_{1}-)- W(y, \overline{z};0)\bigr) \\ &{}+ \theta_{134}\theta_{212}\cdots\theta_{n12}\bigl( W(y, \overline{z};\xi_{2}-) - W(y, \overline{z};\xi_{1}+) \bigr) \\ &{} +\cdots +\theta_{134}\theta_{234}\cdots\theta_{n34} \bigl(W(y,\overline{z};\pi)- W(y,\overline{z};\xi_{n}+)\bigr), \end{aligned}$$
where, as usual, \(W(y, \overline{z};x)\) denotes the Wronskian of the functions y and z̅. From the boundary conditions (2)-(3) it follows that
$$ W(y, \overline{z};0)=0 \quad \mbox{and}\quad W(y, \overline{z}; \pi)=0. $$
The transmission conditions (4)-(5) lead to
$$ \theta_{i12}W(y, \overline{z};\xi_{i}-) = \theta_{i34} W(y, \overline{z};\xi_{i}+), \quad i=1,2, \ldots,n. $$
Substituting (8) and (9) into (7), we obtain the needed equality. □
Lemma 2.2
The linear operator \(\mathcal{A}\) is densely defined in \(\mathcal{H}\).
It suffices to prove that if \(z \in\mathcal{H}\) is orthogonal to all \(y \in D(\mathcal{A})\), then \(z=0\). Suppose that \(\langle y,z\rangle_{\mathcal{H}}=0\) for all \(y \in D(\mathcal{A})\). Denote by \(\bigoplus_{i=1}^{n+1}C_{0}^{\infty}(\Omega_{i})\) the set of all infinitely differentiable functions in Ω vanishing on some neighborhoods of the points \(x=\xi_{k}\), \(k=0,1,2,\ldots,n+1\). Taking into account that \(C_{0}^{\infty}(\xi_{k},\xi_{k+1})\) is dense in \(L_{2}(\xi_{k},\xi_{k+1})\) (\(k=0,1,2,\ldots,n \)), we have that the function \(z(x)\) vanishes on Ω. The proof is complete. □
Corollary 2.3
\(\mathcal{A}\) is a symmetric linear operator in the Hilbert space \(\mathcal{H}\).
Corollary 2.4
All eigenvalues of problem (1)-(5) are real, and two eigenfunctions corresponding to distinct eigenvalues are orthogonal in the sense of the following equality:
$$\begin{aligned} \sum_{k=0}^{n} \frac {1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}y(x)z(x)\,dx=0. \end{aligned}$$
Remark 2.5
In fact, as in our previous work [31], we can prove that the operator \(\mathcal{A}\) is self-adjoint in the Hilbert space \(\mathcal{H}\). Moreover, the resolvent operator \((\mathcal{A}-\lambda I)^{-1}\) is compact in this space.
Now we define two solutions \(\upsilon(x,\lambda)\) and \(\vartheta(x,\lambda)\) of equation (1) on the whole \(\Omega=\bigcup_{i=1}^{n+1}(\xi_{i-1}, \xi_{i})\) by \(\upsilon(x,\lambda)=\upsilon_{i}(x,\lambda)\) for \(x \in \Omega_{i}\) and \(\vartheta(x,\lambda )=\vartheta_{i}(x,\lambda)\) for \(x \in\Omega_{i}\) (\(i=1,2,\ldots, {n+1}\)), where \(\upsilon_{i}(x,\lambda)\) and \(\vartheta_{i}(x,\lambda)\) are defined recurrently by the following procedure. Let \(\upsilon_{1}(x,\lambda)\) and \(\vartheta _{n+1}(x,\lambda)\) be solutions of equation (1) on \((0,\xi_{1})\) and \((\xi_{n},\pi)\) satisfying the initial conditions
$$\begin{aligned} y(0,\lambda)=\alpha_{2}, \qquad y'(0, \lambda)=-\alpha_{1} \end{aligned}$$
$$\begin{aligned} y(\pi,\lambda)=-\beta_{2},\qquad y'(\pi, \lambda)=\beta_{1}, \end{aligned}$$
respectively. In terms of these solutions, we define recurrently the other solutions \(\upsilon_{i+1}(x,\lambda)\) and \(\vartheta_{i}(x,\lambda)\) by the initial conditions
$$\begin{aligned}& \upsilon_{i+1}(\xi_{i}+,\lambda) =\frac{1}{\theta_{i12}}\biggl( \theta_{i23}\upsilon _{i}(\xi_{i}-,\lambda)+ \theta_{i24}\frac{\partial\upsilon _{i}(\xi_{i}-,\lambda)}{\partial x}\biggr), \end{aligned}$$
$$\begin{aligned}& \frac{\partial\upsilon_{i+1}(\xi_{i}+,\lambda)}{\partial x} =\frac{-1}{\theta_{i12}}\biggl(\theta_{i13}\upsilon _{i}(\xi_{i}-,\lambda)+\theta_{i14} \frac{\partial\upsilon _{i}(\xi_{i}-,\lambda)}{\partial x}\biggr)\quad\mbox{and} \end{aligned}$$
$$\begin{aligned}& \vartheta_{i}(\xi_{i}-,\lambda) =\frac{-1}{\theta_{i34}}\biggl( \theta_{i14}\vartheta _{i+1}(\xi_{i}+,\lambda)+ \theta_{i24}\frac{\partial\vartheta _{i+1}(\xi_{i}+,\lambda)}{\partial x}\biggr), \end{aligned}$$
$$\begin{aligned}& \frac{\partial\vartheta_{i}(\xi_{i}-,\lambda)}{\partial x} =\frac{1}{\theta_{i34}}\biggl(\theta_{i13}\vartheta _{i+1}(\xi_{i}+,\lambda)+\theta_{i23} \frac{\partial\vartheta _{i+1}(\xi_{i}+,\lambda)}{\partial x}\biggr), \end{aligned}$$
respectively, where \(i=1,2,\ldots,n \). The existence and uniqueness of these solutions follow from well-known theorems of ordinary differential equation theory. Moreover, by applying the method of [16] we can prove that all these solutions are entire functions of the parameter \(\lambda\in\mathbb{C}\) for each fixed x. Taking into account (13)-(16) and the fact that the Wronskians \(\omega_{i}(\lambda):=W[\upsilon_{i}(x,\lambda ),\vartheta_{i}(x,\lambda)]\) (\(i=1,2,\ldots,n+1\)) are independent of the variable x, we have
$$\begin{aligned} \omega_{i+1}(\lambda) =&\upsilon_{i+1}(\xi_{i}+, \lambda )\frac{\partial\vartheta_{i+1}(\xi_{i}+,\lambda)}{\partial x}-\frac {\partial\upsilon_{i+1}(\xi_{i}+,\lambda)}{\partial x}\vartheta_{i+1}(\xi _{i}+,\lambda) \\ =&\frac{\theta_{i34}}{\theta_{i12}}\biggl(\upsilon_{i}(\xi_{i}-,\lambda) \frac {\partial\vartheta_{i}(\xi_{i}-,\lambda)}{\partial x} -\frac{\partial\upsilon_{i}(\xi_{i}-,\lambda)}{\partial x}\vartheta_{i}(\xi _{i}-,\lambda)\biggr) \\ =&\frac{\theta_{i34}}{\theta_{i12}} \omega_{i}(\lambda )=\prod _{j=1}^{i}\frac{\theta_{j34}}{\theta_{j12}}\omega_{1}( \lambda) \quad (i=1,2,\ldots,n). \end{aligned}$$
It is convenient to define the characteristic function \(\omega(\lambda)\) for our problem (1)-(5) as
$$\omega(\lambda):=\omega_{1}(\lambda) =\prod _{j=1}^{i}\frac{\theta_{j12}}{ \theta_{j34}}\omega_{i+1}( \lambda) \quad(i=1,2,\ldots,n). $$
Obviously, \(\omega(\lambda)\) is an entire function. By applying the technique of [29] we can prove that there are infinitely many eigenvalues \(\lambda_{k}\), \(k=1,2,\ldots \) , of problem (1)-(5), which coincide with the zeros of the characteristic function \(\omega(\lambda)\).
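The construction of \(\omega(\lambda)\) and the location of eigenvalues as its zeros can be imitated numerically. The sketch below treats only the classical constant-coefficient Dirichlet case \(-y''=\lambda y\), \(y(0)=y(\pi)=0\) with no interior points (\(\alpha_{1}=1\), \(\alpha_{2}=0\), so the shot solution starts from \(\upsilon(0)=0\), \(\upsilon'(0)=-1\)); the step count and bracketing intervals are arbitrary choices:

```python
import numpy as np

# Shooting sketch: omega(lam) is taken as the boundary value
# upsilon(pi, lam) of the solution with upsilon(0) = 0, upsilon'(0) = -1.
# Its zeros are exactly the Dirichlet eigenvalues n^2.
def omega(lam, steps=2000):
    h = np.pi / steps
    y, yp = 0.0, -1.0                    # initial conditions of upsilon
    for _ in range(steps):               # classical RK4 for (y, y')' = (y', -lam*y)
        k1y, k1p = yp, -lam * y
        k2y, k2p = yp + 0.5*h*k1p, -lam * (y + 0.5*h*k1y)
        k3y, k3p = yp + 0.5*h*k2p, -lam * (y + 0.5*h*k2y)
        k4y, k4p = yp + h*k3p,     -lam * (y + h*k3y)
        y  += h/6.0 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6.0 * (k1p + 2*k2p + 2*k3p + k4p)
    return y

def bisect(a, b, tol=1e-10):
    # assumes omega changes sign on [a, b]
    fa = omega(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * omega(m) <= 0.0:
            b = m
        else:
            a, fa = m, omega(m)
    return 0.5 * (a + b)

eigs = [bisect(0.5, 1.5), bisect(3.5, 4.5), bisect(8.5, 9.5)]
print(eigs)    # close to the exact eigenvalues 1, 4, 9
```

In the transmission setting the same scheme would additionally apply the interface recurrences (13)-(16) when the integration crosses each \(\xi_{k}\).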
Eigenfunction expansion based on the Green function. Modified Parseval equality
We can show that the Green function for problem (1)-(5) is of the form
$$\begin{aligned} G(x,s;\lambda)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\upsilon(s,\lambda)\vartheta(x,\lambda)}{\omega(\lambda)}, & 0< s \leq x< \pi,\ x,s\neq\xi_{i}, i=1,2,\ldots,n+1, \\ \frac{\upsilon(x,\lambda)\vartheta(s,\lambda)}{\omega(\lambda)}, & 0< x \leq s< \pi,\ x,s\neq\xi_{i}, i=1,2,\ldots,n+1, \end{array}\displaystyle \right . \end{aligned}$$
for \(x, s \in\Omega\) (see, e.g., [26]). It is symmetric with respect to x and s and is real-valued for real λ. Let us show that the function
$$\begin{aligned} y(x,\lambda)=\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}G(x,s;\lambda)f(s)\,ds, \end{aligned}$$
called a resolvent, is a solution of the equation
$$\begin{aligned} a(x)y''+\bigl\{ \lambda-q(x)\bigr\} y=f(x) \end{aligned}$$
(where \(f(x)\neq0\) is a continuous function in each \(\Omega_{i}\) with finite one-sided limits at the endpoints of these intervals) satisfying the boundary-transmission conditions (2)-(5). Without loss of generality, we can assume that \(\lambda=0\) is not an eigenvalue. Otherwise, we take a fixed real number η and consider the boundary-value-transmission problem for the differential equation
$$ a(x)y^{\prime\prime}(x,\lambda)+ \bigl\{ (\lambda+\eta)-q(x)\bigr\} y(x,\lambda)=0 $$
together with the same boundary-transmission conditions (2)-(5) and the same eigenfunctions as for problem (1)-(5). All the eigenvalues are shifted by η to the right. It is evident that η can be selected so that \(\lambda=0\) is not an eigenvalue of the new problem. Let \(G(x,s;0)=G(x,s)\). Then the function
$$\begin{aligned} y(x,\lambda) =& \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}G(x,s)f(s)\,ds \end{aligned}$$
is a solution of the equation \(a(x)y''-q(x)y=f(x)\) satisfying the boundary-transmission conditions (2)-(5). We rewrite (19) in the form
$$\begin{aligned} a(x)y''-q(x)y=f(x)-\lambda y. \end{aligned}$$
Thus, the homogeneous problem (\(f(x)\equiv0\)) is equivalent to the integral equation
$$\begin{aligned} y(x,\lambda)+\lambda \Biggl\{ \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}G(x,s)y(s)\,ds \Biggr\} =0. \end{aligned}$$
Denoting the collection of all the eigenvalues of problem (1)-(5) by \(\lambda_{0}< \lambda_{1}< \lambda_{2}<\cdots<\lambda_{n},\ldots \) and the corresponding normalized eigenfunctions by \(y_{0}, y_{1}, y_{2},\ldots,y_{n},\ldots \) , consider the series
$$Y(x,\xi)=\sum_{n=0}^{\infty}\frac{y_{n}(x)y_{n}(\xi)}{\lambda_{n}}. $$
We can show that \(\lambda_{n}=O(n^{2})\). From this asymptotic formula for the eigenvalues it follows that the series for \(Y(x,\xi)\) converges absolutely and uniformly; therefore, \(Y(x,\xi)\) is continuous in Ω. Consider the kernel
$$K(x,\xi)=G(x,\xi)+ Y(x,\xi)=G(x,\xi)+\sum_{n=0}^{\infty} \frac {y_{n}(x)y_{n}(\xi)}{\lambda_{n}}, $$
which is continuous and symmetric. By a familiar theorem in the theory of integral equations, any symmetric kernel \(K(x,\xi)\) that is not identically zero has at least one eigenfunction [33], that is, there are a number μ and a function \(\psi(x)\neq0\) satisfying the equation
$$\begin{aligned} \psi(x)+\mu\Biggl\{ \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} K(x,\xi)\psi(\xi)\,d\xi\Biggr\} =0 . \end{aligned}$$
Thus, if we show that the kernel \(K(x,\xi)\) has no eigenfunctions, we obtain \(K(x,\xi)\equiv0\), that is,
$$\begin{aligned} G(x,\xi)=-\sum_{n=0}^{\infty} \frac{y_{n}(x)y_{n}(\xi)}{\lambda_{n}}. \end{aligned}$$
It follows from equation (23) that
$$\begin{aligned} \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} G(x,\xi)\psi_{n}(\xi)\,d\xi=-\lambda_{n}^{-1}\psi_{n}(x). \end{aligned}$$
$$\begin{aligned} \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} K(x,\xi)\psi_{n}(\xi)\,d\xi=0, \end{aligned}$$
that is, the kernel \(K(x,\xi)\) is orthogonal to all eigenfunctions of the boundary-value-transmission problem (1)-(5). Let \(y(x)\) be a solution of the integral equation (24). Let us show that \(y(x)\) is orthogonal to all \(\psi_{n}(x)\). In fact, it follows from (24) that
$$\begin{aligned} \sum_{k=0}^{n}\frac {1}{a_{k+1}^{2}} \prod_{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}y(x)\psi_{n}(x)\,dx=0. \end{aligned}$$
$$\begin{aligned} &y(x,\lambda)+\lambda_{0}\Biggl\{ \sum _{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} K(x,\xi)y(\xi)\,d\xi\Biggr\} \\ &\quad=y(x,\lambda)+\lambda_{0}\Biggl\{ \sum _{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} G(x,\xi)y(\xi)\,d\xi\Biggr\} =0, \end{aligned}$$
that is, \(y(x,\lambda)\) is an eigenfunction of the boundary-value-transmission problem (1)-(5). Since it is orthogonal to all \(\psi_{n}(x)\), it is also orthogonal to itself, and, as a consequence, \(y(x,\lambda)=0\) and \(K(x,\xi)=0\). Formula (25) is thus proved.
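Formula (25) can be checked directly in the classical case \(a\equiv1\), \(q\equiv0\) with Dirichlet conditions and no interior points: the eigenpairs are \(\sqrt{2/\pi}\sin(nx)\), \(n^{2}\), and the Green function of \(y''=f\), \(y(0)=y(\pi)=0\) has the closed form \(G(x,s)=-x(\pi-s)/\pi\) for \(x\leq s\). The following sketch compares the truncated bilinear series with this closed form at a few arbitrarily chosen points:

```python
import numpy as np

# Numerical check of the bilinear expansion G(x,s) = -sum y_n(x)y_n(s)/lambda_n
# for y'' = f, y(0) = y(pi) = 0: eigenfunctions sqrt(2/pi)*sin(n*x),
# eigenvalues n^2, closed-form kernel -x*(pi - s)/pi for x <= s.
n = np.arange(1, 20001)          # truncation level (arbitrary choice)

def G_series(x, s):
    return float(-np.sum((2.0 / np.pi) * np.sin(n * x) * np.sin(n * s) / n**2))

def G_exact(x, s):
    x, s = min(x, s), max(x, s)  # kernel is symmetric in x and s
    return -x * (np.pi - s) / np.pi

for (x, s) in [(0.3, 1.1), (2.0, 2.7), (1.5, 1.5)]:
    assert abs(G_series(x, s) - G_exact(x, s)) < 1e-3
print("bilinear expansion matches the closed-form Green function")
```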
Theorem 3.1
(Expansion theorem)
If \(f(x)\) has a continuous second derivative in each \(\Omega_{i}\) (\(i=1,2,\ldots,n+1\)), and satisfies the boundary-transmission conditions (2)-(5), then \(f(x)\) can be expanded into an absolutely and uniformly convergent series of eigenfunctions of the boundary-value-transmission problem (1)-(5) on Ω, that is,
$$\begin{aligned} f(x)=\sum_{m=0}^{\infty}r_{m} \psi_{m}(x), \end{aligned}$$
where \(r_{m}=r_{m}(f)\) are the Fourier coefficients of f given by
$$\begin{aligned} r_{m}=\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} f(x)\psi_{m}(x)\,dx. \end{aligned}$$
Put \(g(x)=a(x)f''-q(x)f\). Then, relying on (18) and (25), we have
$$\begin{aligned} f(x) =& \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}G(x,\xi)g(\xi)\,d\xi \\ =&-\sum_{m=0}^{\infty}\frac{\psi _{m}(x)}{\lambda_{m}}\sum _{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}\psi_{m}(\xi)g(\xi)\,d\xi \\ \equiv&\sum_{m=0}^{\infty }r_{m} \psi_{m}(x). \end{aligned}$$
From the orthogonality and normalization of the functions \(\psi_{m}(x)\) we obtain (29). □
Theorem 3.2
(Modified Parseval equality)
For any function \(f\in \bigoplus_{i=1}^{n+1}L_{2}(\Omega_{i})\), we have the Parseval equality
$$\begin{aligned} \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi _{k+1}-0}f^{2}(x)\,dx=\sum _{m=0}^{\infty}r^{2}_{m}(f). \end{aligned}$$
If \(f(x)\) satisfies the conditions of Theorem 3.1, then (31) follows immediately from the uniform convergence of the series (28). Indeed,
Now, suppose that \(f(x)\) is an arbitrary square-integrable function on the intervals \(\Omega_{i}\) (\(i=1,2,\ldots,n+1\)). Slightly modifying a familiar theorem of real analysis, we can show that there exists a sequence of infinitely differentiable functions \(f_{k}(x)\), converging in mean square to \(f(x)\), such that each function \(f_{k}(x)\) is identically zero in some neighborhoods of the points \(\xi_{i}\) (\(i=0,1,\ldots,n+1\)). From (32) it follows that
$$\begin{aligned} &\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} \bigl[f_{s}(x)-f_{t}(x) \bigr]^{2}\,dx \\ &\quad=\sum_{m=0}^{\infty }\bigl[r_{m}(f_{s})-r_{m}(f_{t}) \bigr]^{2}, \end{aligned}$$
where \(r_{m}(f_{s})\) are, as usual, the Fourier coefficients in (29). Since the left-hand side of (33) tends to zero as \(s,t \rightarrow\infty\), the right-hand side also tends to zero. By applying the Cauchy-Schwarz inequality we obtain
$$\begin{aligned} \bigl| r_{m}(f)-r_{m}(f_{s})\bigr| \leq \Biggl\{ \sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} \bigl[f(x)-f_{s}(x) \bigr]^{2}\,dx\Biggr\} ^{\frac{1}{2}}. \end{aligned}$$
On the other hand, from the convergence in the mean of \(f_{s}(x)\) to \(f(x)\) it follows that
$$\begin{aligned} \lim_{s\rightarrow\infty}r_{m}(f_{s})= r_{m}(f), \quad m=0,1,2,\ldots. \end{aligned}$$
It follows from (33) that
$$\begin{aligned} \sum_{m=0}^{N} \bigl[r_{m}(f_{s})-r_{m}(f_{t}) \bigr]^{2} \leq\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} \bigl[f_{s}(x)-f_{t}(x) \bigr]^{2}\,dx \end{aligned}$$
for an arbitrary integer N. Passing to the limit as \(s\rightarrow \infty\), we obtain
$$\begin{aligned} \sum_{m=0}^{N} \bigl[r_{m}(f)-r_{m}(f_{t})\bigr]^{2} \leq\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} \bigl[f(x)-f_{t}(x) \bigr]^{2}\,dx. \end{aligned}$$
Now letting \(N\rightarrow\infty\) gives
$$\begin{aligned} \sum_{m=0}^{\infty} \bigl[r_{m}(f)-r_{m}(f_{t})\bigr]^{2} \leq\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} \bigl[f(x)-f_{t}(x) \bigr]^{2}\,dx. \end{aligned}$$
Taking into account the Minkowski inequality, we see that the series \(\sum_{m=0}^{\infty}r^{2}_{m}(f)\) converges. Since
$$\begin{aligned} &\Biggl|\sum_{m=0}^{\infty} \bigl(r_{m}(f)\bigr)^{2}-\sum_{m=0}^{\infty } \bigl(r_{m}(f_{t})\bigr)^{2}\Biggr| \\ &\quad=\Biggl|\sum_{m=0}^{\infty } \bigl[r_{m}(f)-r_{m}(f_{t})\bigr] \bigl[r_{m}(f)+r_{m}(f_{t})\bigr]\Biggr| \\ &\quad\leq \Biggl(\sum_{m=0}^{\infty}\bigl| r_{m}(f)-r_{m}(f_{t})\bigr|^{2} \Biggr)^{\frac{1}{2}} \Biggl(\sum_{m=0}^{\infty} \bigl| r_{m}(f)+r_{m}(f_{t})\bigr|^{2} \Biggr)^{\frac{1}{2}}, \end{aligned}$$
we deduce that \(\sum_{m=0}^{\infty}\{r_{m}(f_{t})\}^{2}\rightarrow\sum_{m=0}^{\infty}r_{m}^{2}(f)\) as \(t\rightarrow\infty\). Moreover, from the convergence in the mean of \(f_{t}(x)\) to \(f(x)\) we derive that
$$\begin{aligned} &\lim_{t\rightarrow\infty}\Biggl(\sum _{k=0}^{n}\frac {1}{a_{k+1}^{2}}\prod _{i=0}^{k} \theta_{i34} \prod _{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+}^{\xi_{k+1}-}f_{t}^{2}(x)\,dx \Biggr) \\ &\quad= \sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}} \prod_{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi _{k}+}^{\xi_{k+1}-} f^{2}(x)\,dx. \end{aligned}$$
Finally, letting \(t\rightarrow\infty\) in the equality
$$\begin{aligned} \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+}^{\xi_{k+1}-} f_{t}^{2}(x)\,dx = \sum_{m=0}^{\infty}\bigl(r_{m}(f_{t}) \bigr)^{2}, \end{aligned}$$
we obtain (31) for arbitrary \(f\in \bigoplus_{i=1}^{n+1}L_{2}(\Omega_{i})\). The proof is complete. □
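In the simplest special case of (31), namely one interval (\(n=0\)), \(a\equiv 1\), and all transmission parameters equal to 1, the modified Parseval equality reduces to the classical Parseval equality for the Fourier sine series on \((0,\pi)\). The following sketch (illustrative only; it does not involve the transmission conditions) checks this classical case numerically for \(f(x)=x\), whose sine coefficients are \(b_{m}=2(-1)^{m+1}/m\):

```python
import numpy as np

# Classical special case of the Parseval equality: -y'' = lambda*y on
# (0, pi) with Dirichlet ends has eigenfunctions sin(m x), each with
# squared norm pi/2.  For f(x) = x the sine coefficients are
# b_m = 2(-1)^(m+1)/m, and Parseval says  int_0^pi f^2 dx = sum_m b_m^2 * pi/2.
m = np.arange(1, 1_000_001)
b = 2.0 * (-1.0) ** (m + 1) / m

lhs = np.pi ** 3 / 3.0                # int_0^pi x^2 dx
rhs = np.sum(b ** 2) * np.pi / 2.0    # partial sum over 10^6 terms

print(lhs, rhs)  # both ~ 10.3354
```

The two numbers agree up to the truncation error of the partial sum; in the full transmission setting the integrals would instead carry the weights \(\frac{1}{a_{k+1}^{2}}\prod\theta_{i34}\prod\theta_{i12}\) appearing in (31).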
Modified Carleman equality
We now return to formula (18), whose right-hand side has been called the resolvent. Let
$$\begin{aligned} y(x,\lambda)= \sum_{m=0}^{\infty}t_{m}( \lambda)\psi_{m}(x). \end{aligned}$$
Then, from (22) we have
$$\begin{aligned} r_{m}(f) =&\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} \bigl(a(x)y''(x)+ \bigl(\lambda-q(x)\bigr)y(x)\bigr)\psi_{m}(x)\,dx \\ =&-\lambda _{m}t_{m}(\lambda)+\lambda t_{m}(\lambda). \end{aligned}$$
Hence, \(t_{m}(\lambda)=\frac{r_{m}}{\lambda-\lambda_{m}}\), and the expansion of the resolvent is
$$\begin{aligned} y(x,\lambda) =&\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} G(x,s;\lambda)f(s)\,ds \\ =& \sum_{m=0}^{\infty}\frac{r_{m}\psi_{m}(x)}{\lambda-\lambda_{m}}. \end{aligned}$$
From this an important formula can now be derived. Substituting equality (29) into the right-hand side of (36), we find that
$$\begin{aligned} &\sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} G(x,s;\lambda)f(s)\,ds \\ &\quad=\sum_{m=0}^{\infty}\frac{\psi _{m}(x)}{\lambda-\lambda_{m}} \Biggl\{ \sum_{k=0}^{n}\frac {1}{a_{k+1}^{2}}\prod _{i=0}^{k}\theta_{i34} \prod _{i=k+1}^{n+1}\theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} f(s)\psi_{m}(s)\,ds\Biggr\} . \end{aligned}$$
Since \(f(s)\) is arbitrary,
$$\begin{aligned} G(x,s;\mu)=\sum_{m=0}^{\infty} \frac{\psi_{m}(x)\psi_{m}(s)}{\mu-\lambda_{m}}. \end{aligned}$$
Setting \(s=x\) in (38), integrating over Ω, and using the normalization of the eigenfunctions, we obtain
$$\begin{aligned} \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} G(x,x;\mu)\,dx= \sum _{m=0}^{\infty}\frac{1}{\mu-\lambda_{m}}. \end{aligned}$$
Denoting by \(S(\lambda)\) the number of eigenvalues \(\lambda_{n}\) less than λ, from (39) we get the modified Carleman equation for our problem (1)-(5)
$$\begin{aligned} \sum_{k=0}^{n} \frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k} \theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12} \int _{\xi_{k}+0}^{\xi_{k+1}-0} G(x,x;\mu)\,dx= \int_{0}^{\infty}\frac{d S(\lambda)}{\mu-\lambda}. \end{aligned}$$
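In the same classical special case (\(-y''=\lambda y\) on \((0,\pi)\) with Dirichlet ends and no interior points), \(\lambda_{m}=m^{2}\), so the right-hand side of (39) becomes \(\sum_{m\geq 1}1/(\mu-m^{2})\), which for \(\mu=-1\) has the known closed form \(-(\pi\coth\pi-1)/2\). A quick numerical check of this identity:

```python
import numpy as np

# Trace identity (39) in the classical case: eigenvalues lambda_m = m^2,
# so  int G(x, x; mu) dx = sum_{m>=1} 1/(mu - m^2).
# At mu = -1 the series equals -(pi*coth(pi) - 1)/2.
mu = -1.0
m = np.arange(1.0, 2_000_001.0)
series = np.sum(1.0 / (mu - m ** 2))

closed_form = -(np.pi / np.tanh(np.pi) - 1.0) / 2.0
print(series, closed_form)  # both ~ -1.076674
```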
The Rayleigh quotient and minimization principle for problem (1)-(5)
Let \((\lambda,\psi)\) be an eigen-pair for linear operator \(\mathcal{A}\) in the Hilbert space \(\mathcal{H}\), that is, \(\mathcal{A}\psi=\lambda\psi\). From this equality it follows that
$$\lambda=\frac{\langle\mathcal{A}\psi,\psi\rangle_{\mathcal{H}}}{\|\psi\| _{\mathcal{H}}^{2}}. $$
This expression (the so-called Rayleigh quotient) enables us to relate an eigenvalue λ to its eigenfunction ψ. In quantum physics, in particular, it is important to find the first eigenvalue, and the Rayleigh quotient plays an important role in this context.
(Rayleigh quotient)
Let \((\lambda,\psi)\) be an eigen-pair for the Sturm-Liouville differential equation (1). Then the Rayleigh quotient for problem (1)-(5) takes the form
$$\begin{aligned} \lambda :=& R(\psi) \\ =& \frac{\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\{a_{k}^{2}(\psi\psi'|_{\xi _{k}+0}^{\xi_{k+1}-0})+ \int _{\xi_{k}+0}^{\xi_{k+1}-0}(a_{k}^{2}(\psi')^{2}+q\psi ^{2})\,dx\}}{\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}\psi^{2}\,dx}. \end{aligned}$$
The needed Rayleigh quotient (41) can be derived from the Sturm-Liouville equation
$$ -a(x)\psi^{\prime\prime}(x)+ q(x)\psi(x)=\lambda\psi(x),\quad x\in \Omega, $$
by multiplying by ψ and integrating over Ω. Then we have
$$\lambda= \frac{-\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\{a_{k}^{2}\int _{\xi _{k}+0}^{\xi_{k+1}-0}\psi\psi''\,dx+ \int _{\xi_{k}+0}^{\xi_{k+1}-0}q\psi^{2}\,dx\}}{\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}\psi^{2}\,dx}. $$
Integrating by parts gives equation (41). □
Equation (41) is the Rayleigh quotient for considered problem (1)-(5).
(Minimization principle)
The infimum of the Rayleigh quotient for all nonzero continuous functions satisfying the boundary-transmission conditions (2)-(4) is equal to the least eigenvalue, that is,
$$\begin{aligned} \lambda_{1} &:= \inf R(y) \\ &= \inf \frac{ -\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1} \theta_{i12}\{ a_{k}^{2}(\psi\psi'|_{\xi_{k}+0}^{\xi_{k+1}-0})+ \int _{\xi_{k}+0}^{\xi_{k+1}-0}(a_{k}^{2}(y')^{2} +qy^{2})\,dx\}}{ -\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}y^{2}\,dx}. \end{aligned}$$
Suppose that \(\{\lambda_{n}\}\) is an increasing sequence of all eigenvalues of the Sturm-Liouville problem (1)-(5). Let us write the Rayleigh quotient in the form
$$\begin{aligned} R(y)= \frac{-\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}y\mathcal{L}_{k}y\,dx}{-\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}y^{2}\,dx}, \end{aligned}$$
where \(\mathcal{L}_{k}y:=-a_{k}^{2}y''+qy\). Now, we expand an arbitrary function y in terms of the orthogonal eigenfunctions \(\psi_{n}\). Denote \(\Gamma:=\{y\in\bigoplus_{i=1}^{n+1} C^{2}(\Omega_{i}):\mbox{there exist finite one-sided limits}\mbox{ }y^{(k)}(0+0), y^{(k)}(\pi-0), y^{(k)}(\xi_{i}\mp0) \mbox{ for } i=\overline{1,n }, \mathcal{L}_{\alpha}y=\mathcal{L}_{\beta}y=\mathcal{L}_{2k-1}y=\mathcal{L}_{2k}y=0, k=1,2,\ldots,n, y\neq0\}\). If \(y\in \Gamma\), then the series
$$\begin{aligned} y(x)=\sum_{m=0}^{\infty}r_{m} \psi_{m}(x) \end{aligned}$$
converges uniformly to y, where \(r_{m}=r_{m}(y)\) is the Fourier coefficient of y with respect to the orthogonal set \(\{\psi_{n}\}\). By a standard argument we can show that
$$\begin{aligned} \mathcal{L}y=\sum_{m=1}^{\infty}r_{m} \lambda_{m}\psi_{m}. \end{aligned}$$
Now substitution of (45) and (46) into (44) gives us
$$\begin{aligned} R(y)= \frac{-\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}(\sum_{m=1}^{\infty}\sum_{s=1}^{\infty }r_{m}r_{s}\lambda _{s}\psi_{m}\psi_{s})\,dx}{-\sum_{k=0}^{n}\frac{1}{a_{k+1}^{2}}\prod_{i=0}^{k}\theta_{i34} \prod_{i=k+1}^{n+1}\theta_{i12}\int _{\xi_{k}+0}^{\xi _{k+1}-0}(\sum_{m=1}^{\infty}\sum_{s=1}^{\infty}r_{m}r_{s} \psi_{m}\psi_{s})\,dx}. \end{aligned}$$
Since the eigenfunctions \({\psi_{n}}\) are orthogonal, equation (47) becomes
$$\begin{aligned} R(y)= \frac{\sum_{m=1}^{\infty}r_{m}^{2}\lambda _{m}\|\psi_{m}\|^{2}_{\mathcal{H}} }{\sum_{m=1}^{\infty}r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal{H}}}. \end{aligned}$$
Let \(\lambda_{1}\) be the principal eigenvalue (\(\lambda_{1}<\lambda_{m}\) for all \(m>1\)). Then
$$\begin{aligned} R(y)= \frac{\sum_{m=1}^{\infty}\lambda_{m} r_{m}^{2}\|\psi_{m}\| ^{2}_{\mathcal{H}} }{\sum_{m=1}^{\infty}r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal{H}}}\geq \frac{\lambda_{1}\sum_{m=1}^{\infty}r_{m}^{2}\|\psi_{m}\| ^{2}_{\mathcal{H}} }{\sum_{m=1}^{\infty}r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal {H}}}= \lambda_{1}. \end{aligned}$$
Therefore, \(R(y)\geq\lambda_{1}\) for all \(y\in\Gamma\), and thus \(\inf R(y)\geq\lambda_{1}\). On the other hand, it is obvious that \(R(y_{1})=\lambda_{1}\), where \(y_{1}\) is an eigenfunction corresponding to the least eigenvalue \(\lambda_{1}\). The proof is complete. □
In fact, we have proved that \(\lambda_{1}=\min R(y)\).
Let \(\lambda_{1}<\lambda_{2}<\cdots\) be the eigenvalues of problem (1)-(5). Denote \(\Gamma_{k}:=\{y\in \Gamma:\langle y,\psi_{i}\rangle=0, i=1,2,\ldots,k\}\). Then we have the equality
$$\begin{aligned} \lambda_{k+1}=\min_{y\in \Gamma_{k},y\neq0} R(y)=\min _{y\in\Gamma_{k},y\neq0} \frac{\sum_{m=k+1}^{\infty}\lambda_{m} r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal{H}} }{\sum_{m=k+1}^{\infty}r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal{H}}}. \end{aligned}$$
Consider relation (48). Let \(y\in \Gamma_{k}\), \(y\neq0\). Then \(r_{j}=0\) (\(j=1,2,\ldots,k\)), and, consequently, by (47) we have
$$\begin{aligned} R(y)= \frac{\sum_{m=k+1}^{\infty}\lambda_{m} r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal{H}} }{\sum_{m=k+1}^{\infty}r_{m}^{2}\|\psi_{m}\|^{2}_{\mathcal{H}}}. \end{aligned}$$
Now since \(\lambda_{k+1}<\lambda_{m}\) for \(m>k+1\), it follows that \(R(y)\geq\lambda_{k+1}\); moreover, equality holds if \(r_{m}=0\) for \(m>k+1\) (i.e., \(y=r_{k+1}\psi_{k+1}\)). □
It is difficult to compute the eigenvalues explicitly by applying the Rayleigh-Ritz formula (43). However, using the Rayleigh quotient (41) with appropriate test functions, we can obtain good approximations to the eigenvalues. Moreover, from formula (50) it follows that \(\lambda_{k+1}\leq R(z_{k})\) for each test function \(z_{k}\in\Gamma_{k}\). Thus, we can also find an upper bound for the \((k+1)\)th eigenvalue.
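As an illustration of this remark, consider the classical problem \(-y''=\lambda y\), \(y(0)=y(\pi)=0\), whose eigenvalues are exactly \(\lambda_{k}=k^{2}\). Simple polynomial test functions already give tight upper bounds: \(z_{1}=x(\pi-x)\) yields \(R(z_{1})=10/\pi^{2}\approx 1.013\geq\lambda_{1}=1\), and \(z_{2}=x(\pi-x)(\pi-2x)\), which is odd about \(\pi/2\) and hence orthogonal to \(\psi_{1}=\sin x\), yields \(R(z_{2})=42/\pi^{2}\approx 4.256\geq\lambda_{2}=4\). A numerical sketch of this classical case (not the transmission problem (1)-(5)):

```python
import numpy as np

# Rayleigh-quotient upper bounds for -y'' = lambda*y, y(0) = y(pi) = 0,
# whose exact eigenvalues are lambda_k = k^2.
x = np.linspace(0.0, np.pi, 200_001)

def integral(g):
    # composite trapezoid rule on the grid x
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))

def rayleigh(y, dy):
    # R(y) = int (y')^2 dx / int y^2 dx   (here a = 1, q = 0)
    return integral(dy ** 2) / integral(y ** 2)

# Upper bound for lambda_1 = 1:
R1 = rayleigh(x * (np.pi - x), np.pi - 2.0 * x)

# Upper bound for lambda_2 = 4; the test function is odd about pi/2,
# hence orthogonal to psi_1(x) = sin(x), so it lies in Gamma_1:
R2 = rayleigh(x * (np.pi - x) * (np.pi - 2.0 * x),
              np.pi ** 2 - 6.0 * np.pi * x + 6.0 * x ** 2)

print(R1, R2)  # ~ 1.0132 (= 10/pi^2) and ~ 4.2555 (= 42/pi^2)
```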
The authors would like to thank the referees for their valuable comments.
Faculty of Education, Giresun University, Giresun, 28100, Turkey
Kadriye Aydemir
Department of Mathematics, Faculty of Arts and Science, Gaziosmanpaşa University, Tokat, 60250, Turkey
Oktay Sh Mukhtarov
Institute of Mathematics and Mechanics, Azerbaijan National Academy of Sciences, Baku, Azerbaijan
Correspondence to Kadriye Aydemir.
The authors contributed equally to this work. The authors read and approved the final manuscript.
Aydemir, K., Mukhtarov, O.S. Variational principles for spectral analysis of one Sturm-Liouville problem with transmission conditions. Adv Differ Equ 2016, 76 (2016). https://doi.org/10.1186/s13662-016-0800-z
Keywords: Sturm-Liouville problems · boundary-transmission conditions · transmission conditions · expansion theorem · Rayleigh-Ritz formula · Parseval equality · Carleman equation
Italian Economic Journal
November 2015, Volume 1, Issue 3, pp 333–351
How Good are Out of Sample Forecasting Tests on DSGE Models?
Patrick Minford
Yongdeng Xu
Peng Zhou
Out-of-sample forecasting tests of DSGE models against time-series benchmarks such as an unrestricted VAR are increasingly used to check (a) the specification and (b) the forecasting capacity of these models. We carry out a Monte Carlo experiment on a widely-used DSGE model to investigate the power of these tests. We find that in specification testing they have weak power relative to an in-sample indirect inference test; this implies that a DSGE model may be badly mis-specified and still improve forecasts from an unrestricted VAR. In testing forecasting capacity they also have quite weak power, particularly on the left-hand tail. By contrast, a model that passes an indirect inference test of specification will almost certainly also improve on VAR forecasts.
Keywords: Out of sample forecasts · DSGE · VAR · Specification tests · Indirect inference · Forecast performance
We are grateful to participants in the 2014 Konstanz Seminar on Monetary Theory and Policy for discussions of an early contribution to these issues; also to an anonymous referee for most useful comments on an earlier version of this paper.
JEL Classification
In recent years macro-economists have turned to out-of-sample forecasting (OSF) tests of Dynamic Stochastic General Equilibrium (DSGE) models as a way of determining their value to policymakers both for deciding policy and for improving forecasts. Thus, for example, Smets and Wouters (2007) showed that their model of the US economy could beat a Bayesian Vector Auto Regression (BVAR), their point being that while they had estimated the model by Bayesian methods with strong priors, there was a need to show also that the model could independently pass a (classical specification) test of overall fit, since otherwise the priors could have dominated the model's posterior probability. Further papers have documented models' OSF capacity, including Gürkaynak et al. (2013); see Wickens (2014) for a survey of recent attempts by central banks to evaluate their own DSGE models' OSF capacity.1 But how good are these OSF tests? This question is what this paper sets out to answer.
The value of DSGE models' OSF capacity to policymakers comes, as we said, from two main motivations.
The first is to use DSGE models to improve economic forecasting. One can think of an unrestricted VAR as a method that uses data to forecast without imposing any theory. Then if one knows the true theory one can improve the efficiency of these forecasts by imposing this theory on the VAR, to obtain the restricted VAR. This will improve the forecasts, reducing the Root Mean Square Error (RMSE) of forecasts at all horizons. However imposing a false parameter structure on the VAR may produce worse forecasts; the further from the truth the parameters are the worse the forecasts. There will be some 'cross-over point' along this falseness spectrum at which the forecasts deteriorate compared with the unrestricted VAR.
The second reason is the desire to have a well-specified model that can be used reliably in policy evaluation; clearly in assessing the effects of a new policy the better-specified the model, the closer it will get to predicting the true effects. The assessment of the DSGE model's forecasting capacity is being used by policymakers with this desire, as a means of evaluating the extent of the model's mis-specification.
Notice that the two motivations are linked by the requirement of a well-specified model. Thus for the DSGE model to give better forecasts than the unrestricted VAR it needs to be not too far from the true model—i.e. the right side of the cross-over point. It is harder for us to judge how close the model needs to be to the truth for a policy evaluation: this will depend on how robust the policy is to errors in its estimated effects and this will vary according to the policy in question. But we can conclude that both reasons require us to be confident about the model's specification.
Thus evaluations of the DSGE model's forecasting capacity, to be useful, should provide us with a test of the model's specification; and this indeed is how these evaluations are presented to us. Typically the model's forecasting RMSE is compared with that of an unrestricted VAR, e.g. the ratio of the model's RMSE to that of the VAR; there is a distribution for this ratio for the sample size involved and we can see how often the particular model's forecasts give a ratio in say the 5 % tail, indicating model rejection. The asymptotic distribution for this ratio (of two t-distributions) cannot be derived analytically; but we establish below by numerical methods that it is a t-distribution.
The questions we ask in this paper are:
What is the small sample distribution for this ratio for a model (1) if it is true and (2) if it is marginally able to improve other forecasts?
How much power do these OSF evaluations have, viewed as a test of a DSGE model's specification? In other words, can we distinguish clearly between the forecasting performance of a badly mis-specified model and that of the true model?
Can we say anything about the relationship between a DSGE model's degree of mis-specification and its forecasting capacity? There is a large literature on the forecast success of different sorts of models (Clements and Hendry 2005; Christoffel et al. 2011). We would like to see how success is related to specification error.
We investigate these questions using Monte Carlo experiments for a model of the DSGE type being evaluated here; we do so using sample sizes for the out-of-sample forecasts that are of the same order as those used in these tests and so rely not on the asymptotic but on the small sample distributions of the models. In Sect. 2 that follows we explain the OSF tests of a DSGE model. In Sect. 3 we set out the Monte Carlo experiments and show the power of OSF tests of a DSGE model's specification. In Sect. 4 we establish some links between a DSGE model's specification error and its capacity to improve forecasts. Section 5 concludes.
DSGE Models Out-of-Sample Forecasting Tests
DSGE Model OSFs
A DSGE model (e.g. that of Smets and Wouters 2007, henceforth SW) has a general form:
$$\begin{aligned} A_0 E_t (y_{t+1} )= & {} A_1 y_t +B_0 z_t \nonumber \\ z_{t+1}= & {} Rz_t +\varepsilon _{t+1} \end{aligned}$$
where \(y_{t+1} \) are endogenous variables, \(z_t \) are exogenous variables, typically errors, which may be represented by an autoregressive process in which \(\varepsilon _{t+1} \) are shocks (i.e. \(NID(0,\,\Sigma )\)). The solution to a DSGE model can be represented by a restricted VAR:
$$\begin{aligned} x_{t+1} =Ax_t +B\varepsilon _{t+1} \end{aligned}$$
where \(x_{t+1} =(y_{t+1} ,\,z_{t+1})'\). The coefficient matrices A and B are full rank but restricted.
A and B can be derived analytically (see Wickens 2014). Alternatively, if we input the parameter set \(\Omega =\{A_0 ,\,A_1 ,\,B_0 ,\,R\}\) into the programme Dynare (Juillard 2001), then A and B in (2) can be derived by it. OSFs are then derived straightforwardly from (2). Suppose the initial forecast origin is m, then the OSFs are:
$$\begin{aligned} \hat{x}_{m+1}= & {} Ax_m \nonumber \\ \hat{x}_{m+2}= & {} A\hat{x}_{m+1} =A^2x_m \nonumber \\&\ldots \nonumber \\ \hat{x}_{m+l}= & {} A\hat{x}_{m+l-1} =A^lx_m \end{aligned}$$
where \(l=1,\,2,\ldots ,h\). \(\hat{x}_{m+l} \) denotes the l-step-ahead forecast. We also create False models whose parameters are altered from those of the True one in a manner we explain below.
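The iteration in (3) is just repeated application of the transition matrix to the state at the forecast origin. A minimal sketch (the matrix A and state \(x_m\) below are arbitrary illustrative values, not quantities solved from the SW model):

```python
import numpy as np

# l-step-ahead point forecasts from the solved form x_{t+1} = A x_t + B eps_{t+1}:
# since the shocks have mean zero, x_hat_{m+l} = A^l x_m.
A = np.array([[0.7, 0.1],
              [0.0, 0.5]])    # illustrative stable transition matrix
x_m = np.array([1.0, 2.0])    # state at the forecast origin m

def forecast(A, x, l):
    return np.linalg.matrix_power(A, l) @ x

one_step = forecast(A, x_m, 1)   # A x_m
two_step = forecast(A, x_m, 2)   # A (A x_m)
print(one_step, two_step)
```

Each horizon simply re-applies A, so \(\hat{x}_{m+l}=A\hat{x}_{m+l-1}\) as in (3).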
VAR Model OSFs
Consider the first order VAR
$$\begin{aligned} y_{t+1} =Py_t +\varepsilon _{t+1} \end{aligned}$$
where \(\varepsilon _t \) is assumed to be \(NID(0,\,\Sigma )\). Suppose the initial forecast origin is m, the OSFs are:
$$\begin{aligned} \hat{y}_{m+1}= & {} \hat{P}_m y_m \nonumber \\ \hat{y}_{m+2}= & {} (\hat{P}_m )^2y_m \nonumber \\&\ldots \nonumber \\ \hat{y}_{m+l}= & {} (\hat{P}_m )^ly_m \end{aligned}$$
where \(\hat{P}_{m} \) is the OLS (or ML) estimate of the VAR coefficients, i.e. \(\hat{P}_m \!=\![y_{m}^{'} y_{m} ]^{-1}y_{m}^{\prime } y_{m+1} .\)
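A sketch of this estimate-and-iterate step on simulated data (the VAR coefficients `P_true`, the sample sizes and the seed are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_{t+1} = P y_t + eps_{t+1}
P_true = np.array([[0.5, 0.2],
                   [0.1, 0.4]])
T = 400
y = np.zeros((T, 2))
for t in range(T - 1):
    y[t + 1] = P_true @ y[t] + rng.normal(size=2)

# OLS on the first m observations: regress y_{t+1} on y_t
m = 300
Y_lag, Y_lead = y[:m - 1], y[1:m]
P_hat = np.linalg.lstsq(Y_lag, Y_lead, rcond=None)[0].T

# l-step-ahead forecast from the origin: (P_hat)^l times the last observed state
l = 4
y_hat = np.linalg.matrix_power(P_hat, l) @ y[m - 1]
print(np.round(P_hat, 2), y_hat)
```

In the experiments below this estimation is repeated each period as the origin rolls forward, so \(\hat{P}_m\) is re-computed on an expanding sample.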
OSF Tests
The root mean square error (RMSE) of a forecast is defined as:
$$\begin{aligned} \textit{RMSE}_j (l)=\sqrt{\frac{1}{T-l-m}\mathop \sum \limits _{m=M}^{T-l} (y_{m+l} -\hat{y}_{j,\,m+l} )^2} \end{aligned}$$
where \(y_{m+l} \) is the true data, \(\hat{y}_{j,\,m+l} \) is its out of sample forecasts from model j; M is the initial forecast origin. \(l=1,\,2,\ldots ,h\) denotes the l-step ahead forecast. We look at the 4-quarter-ahead (4Q) and 8-quarter-ahead (8Q) forecasts. T is the sample size. \(j=1,\,2\) denotes the two competing models, say M1 is the DSGE model, M2 is the unrestricted VAR model. Then \(\textit{RMSE}_j (l)\) is the root mean squared forecast error for the l-step-ahead forecast of model j.
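Once the matched outcomes and l-step-ahead forecasts have been collected across the rolling origins, (6) is a one-line computation:

```python
import numpy as np

def rmse(actual, forecasts):
    """Root mean squared forecast error over a set of forecast origins, as
    in (6): 'actual' holds the realized values y_{m+l} and 'forecasts' the
    matched l-step-ahead predictions."""
    actual, forecasts = np.asarray(actual), np.asarray(forecasts)
    return np.sqrt(np.mean((actual - forecasts) ** 2))

# A forecaster that misses by +/-1 at every origin has RMSE exactly 1:
example = rmse([1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0])
print(example)  # 1.0
```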
The OSF test is carried out on the ratio of the RMSE of the DSGE model to that of the VAR:
$$\begin{aligned} Ratio(l)=\frac{\textit{RMSE}_{\textit{DSGE}} (l)}{\textit{RMSE}_{\textit{VAR}} (l)} \end{aligned}$$
Since it is hard to find the asymptotic distribution for the OSF Ratio test, we use Monte Carlo methods and when the error distribution is unknown, the bootstrap. By these methods, described in detail below, we obtain the empirical distribution of the OSF Ratio. From this distribution, we find (say) the 95 % percentile and use it as the empirical critical value. Since the tests considered are one-sided tests, the p-value of the OSF Ratio test is the percentage of the empirical distribution above the test statistic. It should be noted that the empirical critical value varies with sample size, forecast origin and forecast horizons.
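The empirical critical value and p-value described above can be sketched as follows; the "simulated" null ratios here are a placeholder distribution standing in for the Monte Carlo (or bootstrap) output of the actual DSGE/VAR comparison:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder null distribution of the RMSE ratio (in the paper this comes
# from replications of the comparison under the null model).
simulated_ratios = 1.0 + 0.1 * np.abs(rng.normal(size=1000))

# One-sided test: the empirical 95% critical value is the 95th percentile...
crit_95 = np.quantile(simulated_ratios, 0.95)

def p_value(ratio_stat, null_ratios):
    # ...and the p-value is the share of the null distribution above the statistic.
    return np.mean(null_ratios > ratio_stat)

print(crit_95, p_value(crit_95, simulated_ratios))
```

By construction a statistic equal to the critical value has a p-value of about 0.05; in the paper this critical value is recomputed for each sample size, forecast origin and horizon.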
To compare out-of-sample forecasting ability, there are two alternative statistics that focus on the difference of the mean-squared forecast errors (MSFEs) between two nested models: the Diebold–Mariano and West (DMW) and the Clark–West (CW) statistics. Diebold and Mariano (1995) and West (1996) construct t-type statistics which are assumed to be asymptotically normal and for which the sample difference between the two MSFEs is zero under the null. Clark and West (2006) and Clark and West (2007) provide an alternative DMW statistic that adjusts for the negative bias in the difference between the two MSFEs.
However in empirical analysis, both the DMW and CW test statistics take their critical values from their asymptotic distributions. Rogoff and Stavrakeva (2008) criticize the asymptotic CW test as oversized; an oversized asymptotic CW test would cause too many rejections of the null hypothesis. Rogoff and Stavrakeva (2008) and Ince (2014) propose to use the bootstrapped OSF test to avoid this size distortion in small samples.
Our bootstrapped OSF test statistics are similar to these. There is not too much difference between the simulated asymptotic distributions of the RMSE ratio and the RMSE difference. But we focus on the ratio of the RMSEs between the DSGE and the VAR model, as this is the measure usually adopted in macroeconomic forecasting studies, such as those discussed here.
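For comparison, here is a sketch of the CW-type adjusted statistic for nested models, following the adjusted loss differential of Clark and West (2007); the forecasts fed in below are stylized, purely to exercise the formula:

```python
import numpy as np

def clark_west(y, f_small, f_large):
    """t-type statistic on the Clark-West (2007) adjusted MSPE differential
    for nested models: f_small are forecasts of the parsimonious (null)
    model, f_large of the larger model."""
    y, f_small, f_large = map(np.asarray, (y, f_small, f_large))
    # adjusted differential: e_small^2 - (e_large^2 - (f_small - f_large)^2)
    f_hat = (y - f_small) ** 2 - ((y - f_large) ** 2 - (f_small - f_large) ** 2)
    return np.mean(f_hat) / (np.std(f_hat, ddof=1) / np.sqrt(len(f_hat)))

rng = np.random.default_rng(2)
y = rng.normal(size=200)
f_small = np.zeros(200)                               # null: forecast the mean
f_large = 0.5 * y + rng.normal(scale=0.1, size=200)   # stylized informative forecast
t_stat = clark_west(y, f_small, f_large)
print(t_stat)  # large and positive: the larger model genuinely improves
```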
The Power of OSF Tests
Monte Carlo Experiments
We follow the basic procedures of Le et al. (2011) to design the Monte Carlo experiment. We take the model of Smets and Wouters (2007) for the US and adopt their posterior modes for all parameters, including for error processes; the innovations are given their posterior standard errors with the normal distribution (Table 1a, b, Smets and Wouters 2007).
We set the sample size (T) at 200, and generate 1000 samples. We set the initial forecast origin (M) at 133. The VAR and DSGE autoregressive processes are initially estimated over the first 133 periods. The models were then used to forecast the data series 4- or 8-periods-ahead over the remaining 67 periods, with re-estimation every period (quarter). We find the distribution of this for the relevant null hypothesis under our small sample from our 1000 Monte Carlo samples. Our null hypothesis for the OSF tests is (1) the True DSGE model and (2) (discussed in Sect. 4) the False DSGE model that marginally succeeds in improving the forecast.
We follow Le et al. (2011) in specifying a False DSGE model. A False DSGE model is chosen by changing the parameters (\(A_0 ,\,A_1 ,\,B_0 )\) in the true model by \(+\) or \(-\) \(q\,\%\) alternately, where q is the degree of falseness. We then extract the model residuals \((z_t )\) from the data, re-estimate the error process and get \(\hat{R}\). Le et al. (2011) consider two ways to extract the model residuals (the Limited Information estimation method, LIML, which projects expectations by Instrumental Variables, and the Exact Method, which projects them as the DSGE model solution) and find that their differences are trivial. We use the Exact Method to estimate the model residuals and get \(\hat{R}\).2 Denoting the false parameters as \(\Omega ^F=\{A_0^F ,\,A_1^F ,\,B_0^F ,\,\hat{R}\}\), we can derive \(A^F\) from Dynare as before. The OSFs are calculated as in (3), except that we use \(A^F\) rather than A. The RMSE of the False DSGE model is:
$$\begin{aligned} RMSE_{DSGE}^F (l)=\sqrt{\frac{1}{T-l-m}\mathop \sum \limits _{m=M}^{T-l} (y_{m+l} -\hat{y}_{DSGE,\,m+l}^F )^2} \end{aligned}$$
where \(\hat{y}_{DSGE,\,m+l}^F \) is the OSF from the False DSGE model. The RMSE of the VAR model remains the same. Then we can obtain the ratio test statistic for each sample.
$$\begin{aligned} Ratio(l)=\frac{RMSE_{DSGE}^F (l)}{RMSE_{VAR} (l)} \end{aligned}$$
The power of the test is the probability of rejecting a hypothesis when it is false. In our OSF test, the power of the ratio test is the probability that the Ratio \(>\) the 5 % critical value for the True distribution.
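Schematically, with placeholder ratio distributions standing in for the Monte Carlo output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder ratio distributions: under the True model the ratio is centred
# on 1; falseness shifts the distribution to the right.
ratios_true = 1.0 + 0.05 * rng.normal(size=1000)
ratios_false = 1.08 + 0.05 * rng.normal(size=1000)

# Critical value from the True-model distribution; power is the share of
# False-model ratios exceeding it.
crit_95 = np.quantile(ratios_true, 0.95)
power = np.mean(ratios_false > crit_95)
print(crit_95, power)  # power well above the 5 % size of the test
```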
Asymptotic Versus Small Sample Distributions
We begin with a discussion of how the distribution for our typical 200-size sample differs from the asymptotic one. In the absence of an analytical expression for the asymptotic distribution we use a sample of 1000 as a proxy (as can be seen from Fig. 2 it is close to the \(t_\infty \) distribution); we raise both the sample used to estimate the models and the subsequent sample used to make the forecasts in proportion, i.e. by 5 times. In this way we obtain five times the sample size for estimation and five times as many forecasts for evaluation; this mimics the idea of raising the data available to 'very large' amounts. Fig. 1 shows that the 5 % critical value differs by more than 10 % between the two distributions for the case shown here, the 4Q forecast, which is typical.
We then normalise the ratio statistic by adjusting its mean and standard deviation. This is plotted against a normal distribution in Fig. 2. It can be observed that the large-sample distribution is very close to a normal distribution. The 5 % critical value for the normalised large-sample ratio is 1.543, which is close to the 5 % critical value from the standard normal distribution (1.645).
In what follows all the distributions are based on Monte Carlo results for \(T=200\). For the sake of brevity we focus solely on the 5 % confidence level test.
Normalized ratio statistics and standard normal distribution
Power of the Specification Test at 5 % Nominal Value
The power of the OSF tests at a 5 % nominal value is reported in Table 1. The first three sets of results are for each variable viewed alone. The last set relates to joint forecast performance; for this we use the square root of the determinant of the joint forecast-error covariance matrix (also used to measure the joint error in Smets and Wouters 2007). See the Appendix for the small-sample distribution and the 5 % critical values associated with the OSF tests in Table 1.
Power of OSF test
(1) The 4Q-ahead GDP growth forecast is rejected less often when the model is 20 % false than when it is 15 % false; this could arise from the re-estimation of the model error processes that takes place when each model version is created, since this re-estimation can offset the effects of the falseness of the parameters. Thus in the 20 % false model this offset could by chance be greater than for the 15 % false model. (2) Sometimes the rejection rate at 95 % confidence dips below 5 %; this can happen for the same reason, namely that error re-estimation can offset the effect of parameter falseness. (3) The Joint 3 rejection rate cannot be obtained as the average of the three individual rejection rates because the forecast behaviour of the three variables may be correlated; thus if a forecast fails on one variable it is more likely to fail on another, raising the joint failure rate.
These results are obtained with stationary errors and with a VAR(1) as the benchmark model. We redid the analysis under the assumption that productivity was non-stationary. The results were very similar to those above. We further looked at a case of much lower forecastability, where we reduced the AR parameters of the error processes to a minimal 0.05 (on the grounds that persistence in data can be exploited by forecasters). Again the results were very similar, perhaps surprisingly. It seems that while absolute forecasting ability of a model, whether it is a DSGE or a VAR, is indeed reduced by lesser forecastability, relative forecasting ability is rather robust to data forecastability. Finally, we redid the original analysis using a VAR(2) as the benchmark; this also produced similar results to those above. All these variants, designed to check the robustness of our results, are to be found in Appendix B.
What we see from Table 1 is that the power is weak. The rejection rate of the DSGE model on its joint performance remains low at the one-year (4Q) horizon until the model reaches 20 % falseness, and at the two-year horizon it does not rise above 40 % even when the model is 20 % false. Notice also that the individual-variable tests show some instability, which is due to the way the OSF uses re-estimated error processes for each overlapping-sample forward projection: each time the errors are re-estimated the full model is in effect changed, and sometimes this improves its forecasting performance, sometimes worsens it. Thus forecast performance does not always deteriorate with rising parameter falseness. When all variables are considered jointly this is much less of a problem, as across the different variables the effects of re-estimation on forecast performance are hardly correlated.
To put this RMSE test in perspective consider the power of the indirect inference Wald test, in sample using a VAR(1) on the same three variables (GDP, inflation and interest rates)—taken from Le et al. (2012a) which also describes in full the procedures for obtaining the test, based on checking how far the DSGE model can generate in simulated samples the features found in the actual data sample (Table 2).
What we see from Table 2 is that the in-sample Wald II test has far more power. Why may this be the case? In forecasting, as we have just emphasised, DSGE models use fitted errors and when the model is mis-specified this creates larger errors which absorb the model's mis-specification; these new errors are projected into the future and could to some degree compensate for the poorer performance by the mis-specified parameters. To put this another way, as the DSGE model produces larger errors, reducing the relative input from the structural model proper, these larger errors take on some of the character of an unrestricted VAR. By contrast in indirect inference false errors compound the model's inability to generate the same data features as the actual data.
Rejection rates for Wald and likelihood ratio for 3 variable VAR(1)
The connection between mis-specification and forecast improvement
For our small samples we find that the cross-over point at which the DSGE model forecasts 1 year ahead less well on average than the unrestricted VAR is 1 % false for output growth and 7 % false for inflation and interest rates; for the three variables together it is also 7 %. This reveals that the lower the power of the forecasting test for a variable, the more useful False models are in improving unrestricted VAR forecasts. Thus for output growth, where power is higher, the DSGE model needs to be less than 1 % false to improve the forecast; yet for inflation and interest rates, where the power is very weak, a model needs only to be less than 7 % false to improve the forecast. This is illustrated in the two cases shown in Fig. 3. In the lower one the false distribution with a mean RMSE ratio of unity (where the DSGE model is on average only as accurate as the unrestricted VAR) is 7 % false; hence any model less false than this will have a distribution with a mean ratio of less than unity, and will therefore on average improve the forecast. In the upper one the false distribution with a mean RMSE ratio of unity is only 1 % false; so to improve output growth forecasts one needs a model that is less than 1 % false. Essentially what is happening with weak power is that as the model becomes more false its RMSE ratio distribution moves little to the right, with the OSF performance deteriorating little; this, as we have pointed out, may be because as the model parameters worsen, the error parameters offset some of this worsening.
What this shows is that if all a policymaker cares about is improving forecasts and the power of the forecast test is weak, then a poorly specified model may still suffice for improvement and will be worth using. This could well account for the willingness of central banks to use DSGE models in forecasting in spite of the evidence from other tests that they are mis-specified and so unreliable for policymaking. We now turn to how central banks can check on the forecasting capacity of their DSGE models using OSF tests.
OSF Tests of Whether a DSGE Model Improves Forecasts
We now consider how policymakers could assure themselves of the forecasting capacity of their DSGE model. Here they set up the marginal forecast-failure model as the null hypothesis, illustrated as the red distributions in Fig. 3. This is the structure of the Diebold and Mariano (1995) test widely used to compare the forecast accuracy of models. Notice that policymakers can either look at the right hand tail, which tests the null against the alternative that the model forecasts worse; if they use this test they are assuming, in the event of non-rejection, that the model forecasts at least marginally better: the benefit of the doubt goes to the model. Or they can look at the left hand tail, which tests against the alternative that the model forecasts better; if they use this test they are assuming, in the event of non-rejection, that the model is not worth using: the benefit of the doubt goes to the VAR forecast. If they obtain a result in the left hand tail, then they can be sure, at least with 95 % confidence, that the model will improve forecasts. If they obtain a result in the right hand tail, then again they can be sure, at least with 95 % confidence, that the model will worsen forecasts. We need to check the power of each tail: how fast rejection rises on the RH tail as models get worse, and on the LH tail how fast it rises as models get better. The situation is illustrated in Fig. 4.
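The structure of the Diebold–Mariano comparison can be sketched as follows; this simplified version uses squared-error loss and a plain t-ratio without the autocorrelation (HAC) variance correction of the full test, and the error series are invented stand-ins:

```python
import numpy as np

def dm_stat(e_model, e_bench):
    """Diebold-Mariano statistic on the loss differential
    d_t = e_model_t^2 - e_bench_t^2 (simplified: no HAC correction)."""
    d = np.square(e_model) - np.square(e_bench)
    return float(np.mean(d) / np.sqrt(np.var(d, ddof=1) / d.size))

rng = np.random.default_rng(2)
e_bench = rng.normal(0.0, 1.0, 67)   # benchmark VAR forecast errors (stand-in)
e_model = 0.8 * e_bench              # a model with uniformly smaller errors (stand-in)
stat = dm_stat(e_model, e_bench)
# stat < -1.645: left-hand tail, the model forecasts significantly better;
# stat > +1.645: right-hand tail, the model forecasts significantly worse.
```

The two one-sided rejection regions correspond exactly to the LH and RH tails discussed in the text.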
Illustration of LH and RH tails
Power of Left Hand and Right Hand Tails
Table 3 shows for the joint-3 case (the results for individual variables are reported in the appendix) the power of the Left Hand and Right Hand tails as just discussed. Thus for the LH tail we show the chances of less False models being rejected, while for the RH tail we show the chances of more False models being rejected.
Power of OSF tests: LHT and RHT
The main problem with these tests remains that of poor power.
On the one hand, policymakers could use a DSGE model that was poor at forecasting without detection by the RH tail test. Thus for example a model that was 3 % more false than the marginal one would only be rejected on the crucial 4Q-ahead test 11.3 % of the time on the RH tail.
On the other hand, they could refuse to use a DSGE model that was good at forecasting without detection; for example a model that was 3 % less False than the marginal one would only be rejected on the 4Q-ahead test by the LH tail 9.8 % of the time.
We can design a more powerful test by going back to Table 2 and using simply the right hand tail as a test of specification. What is needed is a test of the DSGE model's specification (as true) that has power against a model that is so badly specified that it would marginally worsen forecasting performance on the joint 3 variables (the marginal forecast-failure model): as we have seen, such a model is 7 % false at the 4Q horizon and 15 % false at the 8Q horizon. Now the power of OSF specification tests against such a bad model is larger: Table 3 shows that if on an OSF 4Q test at 95 % confidence a model is not rejected (as true), then the marginal forecast-failure model (the 7 % false model) has a 22.9 % chance of rejection. On an 8Q test the equivalent model (15 % false) has a 29.5 % chance of rejection. Thus the OSF test has better power against the marginal forecast-failure model; but it is still quite weak.
Policymakers could however use the II in-sample test of whether the model is true also shown in that Table. Against the 4Q 7 % false model it has power of 99.4 %, and against the 8Q 15 % false model power of 100 %. Thus if policymakers could find a DSGE model that was not rejected by the II test, then they could have complete confidence that it could not worsen forecasts.
If no DSGE model can be found that fails to be rejected, then this strategy would not work and one must use the Diebold–Mariano test faute de mieux, on whatever DSGE model comes closest to passing the II specification test.
Reviewing the Evidence of OSF Tests
In this subsection we review some of the available OSF tests of DSGE models against time-series alternatives and see how we could interpret them in the light of these Monte Carlo experiments. Our aim is not to go through all such tests but merely to illustrate from some prominent ones how one might interpret the available evidence; we choose in particular those of Smets and Wouters (2007) and Gürkaynak et al. (2013) for the Smets and Wouters (2007) model of the US on which our Monte Carlo experiment is also focused (Table 4).
DSGE/Time-series RMSE ratio for SW real-time data
Source: Gürkaynak et al. (2013), SW post-war model, for 1992–2007 as the OSF period. NB they report the inverse of these ratios. Smets and Wouters (2007), SW model, for 1990–2004 as the OSF period. NB they report the percentage gains relative to a VAR(1) model; we convert these to RMSE ratios
If we first consider the forecasting performance of these DSGE models, what we see from Table 4 is that the RMSE ratio of DSGE models relative to different time-series forecasting methods varies from better to worse according to which variable and which time-series benchmark is considered: Gürkaynak et al. (2013) note that there is a wide variety of relative RMSE performance. Wickens (2014), who reviews a wide range of country/variable forecasts, finds the same. No joint performance measures are reported in these papers; however, Smets and Wouters (2007)'s joint ratio comes out at 0.8 against a VAR(1) 4Q-ahead and 0.66 8Q-ahead. Thus on these joint ratios the LH tail rejects the marginal forecast-failure model, strong evidence that the SW model forecasts better than a VAR(1).
If we turn now to consider DSGE models' specification from these results, we see first that in general they do not reject these DSGE models. But because of the low power of the OSF tests, the same would be true with rather high probability of quite false models. Le et al. (2011) show that the SW model is strongly rejected by the II Wald test, which is consistent with these OSF results, since as we have seen a false DSGE model may still forecast better than a VAR. They went on to find a version of the model, allowing for the existence of a competitive sector, that was not rejected for the Great Moderation period. By the arguments of this paper this model must also improve on time-series forecasts.
OSF tests are now regularly carried out on DSGE models against time-series benchmarks such as the VAR1 used here as typical. These tests aim to discover how good DSGE models are in terms of (a) specification (b) forecasting performance. Our aim in this paper has been to discover how well these tests achieve these aims.
We have carried out a Monte Carlo experiment on a DSGE model of the type commonly used in central banks for forecasting purposes and on which out-of-sample (OSF) tests have been conducted. In this experiment we generated the small sample distribution of these tests and also their power as a test of specification; we found that the power of the tests for this purpose was extremely low. Thus when we apply these results to the reported tests of existing DSGE models we find that none of them are rejected on a 5 % test; but the lack of power means that models that were substantially false would have a very high chance also of not being rejected. Researchers could therefore have little confidence in these tests for this purpose. We show that they would be better off using an in-sample indirect inference test of specification which has substantial power.
The reason for this relative weakness of OSF tests on DSGE models may be that the model errors, which are enlarged by the model's mis-specification, nevertheless when projected forward compensate for the poorer forecasting contribution of the mis-specified structural parameters. A corollary of the low power is therefore that a DSGE model may be badly mis-specified and yet still forecast well, and so still improve forecasts even when badly mis-specified.
Viewed as tests of forecasting performance against the null of doing exactly as well as the VAR benchmark, OSF tests of DSGE models are used widely, with both the left hand tail of the distribution testing for significantly better performance and the right hand tail for significantly worse performance. Power is again rather weak, particularly on the left hand tail. An alternative would again be to use an in-sample indirect inference test of specification; if a DSGE model specification can be found that passes such a test, then it may not only be fit for policy analysis but will also almost definitely improve VAR forecasts.
Other papers that have computed OSF performance of DSGE models relative to time-series models include: Adolfson et al. (2007), Edge and Gürkaynak (2010), Edge et al. (2010), Giacomini and Rossi (2010), and Del Negro and Schorfheide (2012).
We only reestimate the errors for a given False model (for each overlapping sample). If we reestimated the whole False model each period, it would have variable falseness.
It is defined as follows. Let \(f_y ,\,f_\pi ,\,f_r \) be the OSF errors of output growth, inflation and the interest rate respectively. Denote \(f=(f_y ,\,f_\pi ,\,f_r )'\). Then f is a \((T-l-M)\times 3\) matrix. We can calculate the covariance of f. The joint RMSE is defined as \(\sqrt{\vert cov(f)\vert }\).
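A direct transcription of this definition, assuming `f` is the stacked matrix of OSF errors (the simulated errors below are invented for illustration):

```python
import numpy as np

def joint_rmse(f):
    """Joint RMSE = sqrt(|cov(f)|), where f is an (n x 3) matrix of OSF
    errors for output growth, inflation and the interest rate."""
    return float(np.sqrt(np.linalg.det(np.cov(f, rowvar=False))))

rng = np.random.default_rng(3)
# Independent errors with std devs 1, 2, 3: |cov| is about 36, so joint RMSE is about 6
f = rng.normal(0.0, [1.0, 2.0, 3.0], size=(200000, 3))
jr = joint_rmse(f)
```

The determinant penalises jointly correlated failures, which is why the Joint 3 rejection rate is not the average of the individual rates.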
Smets and Wouters (2007) calculate the overall percentage gain as \((\log (\vert cov(f_{VAR} )\vert )-\log (\vert cov(f_{DSGE} )\vert ))/2k\), where k is the number of variables (here \(=\) 3). We convert this to the joint ratio as follows: \((\log (\vert cov(f_{VAR} )\vert )-\log (\vert cov(f_{DSGE} )\vert ))/2k=-(\log \sqrt{\vert cov(f_{DSGE} )\vert }-\log \sqrt{\vert cov(f_{VAR} )\vert } )/k\approx -\frac{\sqrt{\vert cov(f_{DSGE} )\vert } -\sqrt{\vert cov(f_{VAR} )\vert } }{\sqrt{\vert cov(f_{VAR} )\vert }\,k}=-\frac{JRMSE_{DSGE} -JRMSE_{VAR} }{JRMSE_{VAR}\,k}=-(JointRatio-1)/k\).
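As a numerical check, assuming the intended relation in the footnote is gain \(\approx (1-JointRatio)/k\) with k = 3 variables, the 0.8 joint ratio quoted in the text corresponds to a gain of about 6.7 %:

```python
def gain_to_ratio(gain, k=3):
    """Invert the approximation gain ~= (1 - JointRatio)/k."""
    return 1.0 - k * gain

def ratio_to_gain(ratio, k=3):
    """Percentage gain implied by a joint RMSE ratio under the same approximation."""
    return (1.0 - ratio) / k

g = ratio_to_gain(0.8)   # about 0.067: SW-style gain implied by a 0.8 joint ratio
r = gain_to_ratio(g)     # recovers the 0.8 joint ratio
```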
Appendix A: Small Sample Distribution and 5 % Critical Values of OSF Tests
See Appendix Fig. 5 and Table 5.
Historical distribution of ratio statistics: T \(=\) 200
Empirical critical value at 5 percent level
Appendix B: Experiments With Alternative Error Processes
Productivity Shock Follows an I(1) Process
We look here at the effect of non-stationarity in the shocks, as exemplified by a non-stationary productivity process. We do not alter the status of the other shocks because they are typically found to be stationary for the SW model: for example, in related work on the SW data Le et al. (2012b) found that only productivity was non-stationary—see their Table 2 on p. 11. The results are reported in Table 6.
There is essentially no difference in the power of the test as productivity becomes I(1), thereby also making output I(1) (though leaving inflation and interest rates stationary). The change makes output growth positively instead of negatively autocorrelated and so may well make little difference to how easy it is to forecast.
The choice on stationarity is dictated by the general absence of unit roots in shocks other than productivity- for example in related work on the SW data Le et al. (2012b) found that only productivity was non-stationary—see Table 2 on p. 11 of "What causes banking crises? An empirical investigation" by Vo Phuong Mai Le, David Meenagh and Patrick Minford, Working Paper No. E2012/14, Cardiff University, Economics Section, Cardiff Business School, June 2012, updated April 2013—available from Minford repec page.
Altering the Forecastability of the Economy
One might think that the power of the test would be affected by ease of forecasting the economy. We look at this issue by reducing the AR coefficients of the error processes to 0.05 from their SW values.
What we see, reported in Table 7, is power that is not dissimilar to that found earlier.
Altering the Benchmark Model
One might be concerned that the power of the test would be affected by using a higher-order VAR. We therefore choose a VAR(2) as the benchmark model and recompute the power of the test. The results are reported in Table 8.
With a VAR(2) as the benchmark model, the OSF tests have similarly low power. The second-lag coefficients are mostly insignificant; including higher-order terms worsens the VAR's forecast capacity. This is also consistent with the wider literature (e.g. Smets and Wouters 2007; Wickens 2014), in which a VAR(1) is often chosen as the benchmark model.
Appendix C: OSF Tests of Whether a DSGE Model Improves Forecasts for Individual Variables
See Appendix Tables 9 and 10.
Power of OSF test: RHT
Power of OSF test: LHT
Adolfson M, Linde J, Villani M (2007) Forecasting performance of an open economy dynamic stochastic general equilibrium model. Econom Rev 26(2–4):289–328
Christoffel K, Coenen G, Warne A (2011) Forecasting with DSGE models. In: Clements M, Hendry D (eds) Oxford handbook of economic forecasting. Oxford University Press, Oxford
Clark T, West KD (2006) Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis. J Econom 135:155–186
Clark T, West KD (2007) Approximately normal tests for equal predictive accuracy in nested models. J Econom 138:291–311
Clements M, Hendry D (2005) Evaluating a model by forecast performance. Oxford Bull Econ Stat 67(Supplement):931–956
Del Negro M, Schorfheide F (2012) Forecasting with DSGE models: theory and practice. In: Elliott G, Timmermann A (eds) Handbook of forecasting, vol 2. Elsevier, New York
Diebold FX, Mariano RS (1995) Comparing predictive accuracy. J Bus Econ Stat 13:253–263
Edge RM, Kiley MT, Laforte JP (2010) A comparison of forecast performance between Federal Reserve staff forecasts, simple reduced-form models, and a DSGE model. J Appl Econom 25(4):720–754
Edge RM, Gürkaynak RS (2010) How useful are estimated DSGE model forecasts for central bankers? Brook Pap Econ Act 41(2):209–259
Giacomini R, Rossi B (2010) Forecast comparisons in unstable environments. J Appl Econom 25(4):595–620
Gürkaynak RS, Kisacikoglu B, Rossi B (2013) Do DSGE models forecast more accurately out-of-sample than VAR models? CEPR discussion paper no. 9576, July 2013. CEPR, London
Ince O (2014) Forecasting exchange rates out-of-sample with panel methods and real-time data. J Int Money Finance 43(C):1–18
Juillard M (2001) DYNARE: a program for the simulation of rational expectation models. In: Computing in economics and finance, p 213
Le VPM, Meenagh D, Minford P, Wickens M (2012a) Testing DSGE models by indirect inference and other methods: some Monte Carlo experiments. Cardiff economics working paper E2012/15
Le VPM, Meenagh D, Minford P, Wickens M (2012b) What causes banking crises? An empirical investigation. Cardiff economics working paper E2012/14
Le VPM, Meenagh D, Minford P, Wickens M (2011) How much nominal rigidity is there in the US economy? Testing a New Keynesian model using indirect inference. J Econ Dyn Control 35(12):2078–2104
Rogoff KS, Stavrakeva V (2008) The continuing puzzle of short-horizon exchange rate forecasting. NBER working paper 14071
Smets F, Wouters R (2007) Shocks and frictions in US business cycles: a Bayesian DSGE approach. Am Econ Rev 97(3):586–606
West KD (1996) Asymptotic inference about predictive ability. Econometrica 64:1067–1084
Wickens M (2014) How useful are DSGE macroeconomic models for forecasting? Open Econ Rev 25(1):171–193
© Società Italiana degli Economisti (Italian Economic Association) 2015
2. CEPR, London, UK
3. Cardiff Metropolitan University, Cardiff, UK
Minford, P., Xu, Y. & Zhou, P. Ital Econ J (2015) 1: 333. https://doi.org/10.1007/s40797-015-0020-9
Accepted 07 July 2015 | CommonCrawl |
The Effect of Identification Framing as Crisis Response Strategy
Cho, Seung-Ho (Department of Global Commerce, Soongsil University)
Received : 2017.12.20
Accepted : 2018.01.19
The current study attempts to suggest an umbrella strategy that can be applied to different types of crisis, in contrast to the normative principle in crisis communication. The umbrella, or comprehensive, strategy in this study is identification framing. Identification framing is a strategic message for organizational identification, which is close to social identification. The study employed an experimental design manipulating crisis type, crisis response type, and identification framing: the crisis types were internal versus external crisis, the crisis responses were denial versus apology, and together with the identification framing manipulation this yielded a $2{\times}2{\times}2$ factorial design. Two hundred forty students participated in the experiment. The results showed a significant effect of identification framing across the different crisis types and crisis responses.
Identification Framing; Crisis Communication; Crisis Response Strategy; Crisis Type
Supported by: National Research Foundation of Korea
Springer Proceedings in Mathematics & Statistics
Knots, Low-Dimensional Topology and Applications
Knots in Hellas, International Olympic Academy, Greece, July 2016
Editors: Adams, C.C., Gordon, C.M., Jones, V., Kauffman, L.H., Lambropoulou, S., Millett, K.C., Przytycki, J.H., Ricca, R.L., Sazdanovic, R. (Eds.)
Collection of high-quality, state-of-the-art research and survey articles
Top researchers, including Fields Medal winner Vaughan Jones
Research in new directions, new tools and methods
This proceedings volume presents a diverse collection of high-quality, state-of-the-art research and survey articles written by top experts in low-dimensional topology and its applications.
The focal topics include the wide range of historical and contemporary invariants of knots and links and related topics such as three- and four-dimensional manifolds, braids, virtual knot theory, quantum invariants, skein modules and knot algebras, link homology, quandles and their homology; hyperbolic knots and geometric structures of three-dimensional manifolds; the mechanism of topological surgery in physical processes, knots in Nature in the sense of physical knots with applications to polymers, DNA enzyme mechanisms, and protein structure and function.
The contents are based on contributions presented at the International Conference on Knots, Low-Dimensional Topology and Applications – Knots in Hellas 2016, which was held at the International Olympic Academy in Greece in July 2016. The goal of the international conference was to promote the exchange of methods and ideas across disciplines and generations, from graduate students to senior researchers, and to explore fundamental research problems in the broad fields of knot theory and low-dimensional topology.
This book will benefit all researchers who wish to take their research in new directions, to learn about new tools and methods, and to discover relevant and recent literature for future study.
A Survey of Hyperbolic Knot Theory
Futer, David (et al.)
Spanning Surfaces for Hyperbolic Knots in the 3-Sphere
Adams, Colin C.
On the Construction of Knots and Links from Thompson's Groups
Jones, Vaughan F. R.
Virtual Knot Theory and Virtual Knot Cobordism
Kauffman, Louis H.
Knot Theory: From Fox 3-Colorings of Links to Yang–Baxter Homology and Khovanov Homology
Przytycki, Józef H.
Algebraic and Computational Aspects of Quandle 2-Cocycle Invariant
Clark, W. Edwin (et al.)
A Survey of Quantum Enhancements
Nelson, Sam
From Alternating to Quasi-Alternating Links
Chbili, Nafaa
Hoste's Conjecture and Roots of the Alexander Polynomial
Stoimenov, Alexander
A Survey of Grid Diagrams and a Proof of Alexander's Theorem
Scherich, Nancy C.
Extending the Classical Skein
Kauffman, Louis H. (et al.)
From the Framisation of the Temperley–Lieb Algebra to the Jones Polynomial: An Algebraic Approach
Chlouveraki, Maria
A Note on $$\mathfrak {gl}_{m|n}$$ Link Invariants and the HOMFLY–PT Polynomial
Queffelec, Hoel (et al.)
On the Geometry of Some Braid Group Representations
Spera, Mauro
Towards a Version of Markov's Theorem for Ribbon Torus-Links in $$\mathbb {R}^4$$
Damiani, Celeste
An Alternative Basis for the Kauffman Bracket Skein Module of the Solid Torus via Braids
Diamantis, Ioannis
Knot Invariants in Lens Spaces
Gabrovšek, Boštjan (et al.)
Identity Theorem for Pro-p-groups
Mikhovich, Andrey M.
A Survey on Knotoids, Braidoids and Their Applications
Gügümcü, Neslihan (et al.)
Regulation of DNA Topology by Topoisomerases: Mathematics at the Molecular Level
Ashley, Rachel E. (et al.)
Topological Entanglement and Its Relation to Polymer Material Properties
Panagiotou, Eleni
Topological Surgery in the Small and in the Large
Antoniou, Stathis (et al.)
Colin C. Adams
Cameron McA. Gordon
Vaughan Jones
Louis H. Kauffman
Sofia Lambropoulou
Kenneth C. Millett
Józef H. Przytycki
Renzo L. Ricca
Radmila Sazdanovic
263 b/w illustrations, 56 illustrations in colour
Binary phase hopping based spreading code authentication technique
Shenran Wang1,
Hao Liu1,
Zuping Tang ORCID: orcid.org/0000-0002-7332-55221 &
Bin Ye1
Satellite Navigation volume 2, Article number: 4 (2021) Cite this article
Civil receivers of the Global Navigation Satellite System (GNSS) are vulnerable to spoofing and jamming attacks due to their open signal structures. The Spreading Code Authentication (SCA) technique is one of the encryption-based GNSS identity authentication techniques. Its robustness and complexity are in between Navigation Message Authentication (NMA) and Navigation Message Encryption (NME)/Spreading Code Encryption (SCE). A commonly used spreading code authentication technique inserts unpredictable chips into the public spreading code. This method changes the signal structure, degrades the correlation of the spreading code, and causes performance loss. This paper proposes a binary phase hopping based spreading code authentication technique, which can achieve identity authentication without changing the existing signal structure. Analysis shows that this method reduces the performance loss of the original signal and has good compatibility with the existing receiver architecture.
Global Navigation Satellite System (GNSS) is an important national infrastructure, which plays a key role in vehicle navigation, civil aviation, financial transactions and many others (Liang et al. 2013). GNSS civil receivers are vulnerable to spoofing and jamming attacks because the format and modulation of GNSS civil signals are public ("GPS Interface Control Documents IS-GPS-200G" 2012; Humphreys 2013), and there exist obvious security vulnerabilities (Guenther 2014). Deception jamming is divided into repeater deception jamming and generated spoofing jamming (Hu et al. 2016). It is of great significance to study the anti-deception technology and improve the robustness of receivers. GNSS anti-spoofing technology is categorized into non-encryption-based technology and encryption-based technology (Psiaki and Humphreys 2016). The non-encryption-based technology mainly includes signal quality monitoring, doppler consistency monitoring and other anti-spoofing technologies. The encryption-based technology includes Navigation Message Authentication (NMA), Spreading Code Authentication (SCA), Navigation Message Encryption (NME) and Spreading Code Encryption (SCE) (Dovis 2015; Shen and Guo 2018a). Anti-spoofing technology can greatly enhance the security of information (Wesson et al. 2012).
The SCA technique is considered to be one of the key innovations for the next generation of GNSS civil signals (Margaria et al. 2017). Its robustness and complexity are in between NMA and NME/SCE. For the SCA technique, unpredictable chips are inserted into the unencrypted public spreading code and verified in receivers to ensure the credibility of pseudorange measurements (Shen and Guo 2018a; b). At present, the main implementations of the SCA technique include Spread Spectrum Security Code (SSSC) (Scott 2003), Hidden Marker (HM) (Kuhn 2005) and Signal Authentication Sequence (SAS) (Pozzobon et al. 2011; Pozzobon 2011). The common idea at the signal level is to insert unpredictable authentication chips into the public spreading code. The advantage of the SCA technique is that the received signal power is only about −160 dB·W, so unless the encrypted information is available it is difficult for attackers to predict the SCA chips correctly. The disadvantage is that the correlator output is greatly attenuated as the proportion of SCA chips in the code sequence increases, causing acquisition and tracking to fail for receivers that do not participate in identity authentication (Pozzobon 2011). When the proportion of inserted chips is small, the signal is vulnerable to multiple access interference. Moreover, adjusting the time, position and proportion of chip insertion is inflexible.
In view of the above problems, this paper proposes a binary phase hopping based SCA technique. The proposed technique prevents non-cooperative parties from obtaining the authentication information and improves the confidentiality of the signal. Phase hopping modulation can be multi-ary; the proposed technique uses binary phase hopping. By adding pseudo-random phase hops to the civil signal and correlating the demodulated signal with the pseudo-random code in the receiver, identity authentication is achieved. This technique reduces the performance loss of the original signal and the impact on receivers that do not participate in authentication. Besides, it has good compatibility with the existing receiver architecture, stronger resistance to multiple access interference, and a higher authentication success rate. Moreover, it is more flexible: the transmitter can adjust the ratio of the authentication component, and the receiver can choose a flexible receiving mode. This SCA method provides a good technical solution for the design of modern GNSS signals.
Phase hopping modulation
Phase hopping modulation is a new anti-interception method for improving the security and reliability of a system. Its aim is to improve the security performance of a wireless communication system without increasing the system bandwidth.
Phase hopping modulation is suitable for a variety of signals, such as baseband signal, Radio Frequency (RF) signal, and carrier. This modulation can also be regarded as a secondary modulation after the basic modulation, including Phase Shift Keying (PSK) modulation, Quadrature Amplitude Modulation (QAM), etc. The phase hopping sequence generator generates a phase hopping sequence to control the phase shifter, so the initial phase of the input signal changes with the hopping of the phase hopping sequence. Then the output signal can be processed according to different requirements and transmitted by the antenna. For the demodulation unit in the receiver, the same phase sequence generator generates the phase hopping sequence and controls the phase compensator to compensate the signal phase so as to achieve demodulation. The phase compensator is implemented by a phase shifter, which makes the phase of the input signal change with the hopping sequence. These two hopping procedures are complementary, which is essential for the signal synchronization.
Phase hopping modulation unit
The phase hopping modulation unit is shown in Fig. 1.
The phase hopping sequence generator generates N-ary pseudo-random sequence \(c(k)\), which is used as the phase hopping sequence, and the corresponding phase offset is
$$\varphi (k) = 2{\uppi }\frac{c(k)}{N}$$
The output signal \(T_{{{\text{out}}}} (t)\) is
$$T_{{{\text{out}}}} (t) = T_{{{\text{in}}}} (t)e^{j\varphi (t)}$$
where \(e^{j\varphi (t)}\) is phase shift factor. The relationship between \(t\) and \(k\) is
$$k = \left\lfloor {t/T_{c} } \right\rfloor$$
where \(T_{c}\) is the chip width of the phase hopping sequence.
Phase hopping demodulation unit
The phase hopping demodulation unit is shown in Fig. 2.
Under the control of a synchronous system, the phase hopping sequence generator generates the same pseudo-random sequence \(c(k)\). The output signal \(R_{{{\text{out}}}} (t)\) is
$$R_{{{\text{out}}}} (t) = R_{{{\text{in}}}} (t)e^{ - j\varphi (t)}$$
where \(e^{ - j\varphi (t)}\) is phase compensation factor.
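The complementary modulation and demodulation factors can be checked numerically. The sketch below (NumPy, with illustrative variable names and parameter values of our choosing, not the paper's code) builds an N-ary hopping sequence \(c(k)\), maps each sample \(t\) to its chip via \(k = \lfloor t/T_{c} \rfloor\), applies \(e^{j\varphi(t)}\), and undoes it with \(e^{-j\varphi(t)}\):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                # N-ary phase hopping alphabet
Tc = 8               # samples per phase-hopping chip
num_chips = 64

# Phase hopping sequence c(k) in {0, ..., N-1}; phase offsets phi(k) = 2*pi*c(k)/N
c = rng.integers(0, N, num_chips)
phi = 2 * np.pi * c / N

# Each sample index t belongs to chip k = floor(t / Tc)
t = np.arange(num_chips * Tc)
phi_t = phi[t // Tc]

# Arbitrary complex baseband input signal T_in(t)
T_in = np.exp(1j * 0.01 * t) + 0.3 * rng.standard_normal(t.size)

# Modulation: T_out(t) = T_in(t) * e^{j*phi(t)}
T_out = T_in * np.exp(1j * phi_t)

# Demodulation with the synchronized sequence: R_out(t) = T_out(t) * e^{-j*phi(t)}
R_out = T_out * np.exp(-1j * phi_t)

# The two phase factors cancel exactly, so the input is recovered
print(np.max(np.abs(R_out - T_in)) < 1e-12)  # True
```

As the last line shows, with a synchronized sequence the round trip is lossless; a receiver without \(c(k)\) sees a pseudo-randomly rotated signal instead.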
Binary phase hopping based SCA technique
The commonly used SCA technique inserts unpredictable authentication chips into the public spreading code. This paper proposes an SCA technique that modulates authentication information on the signal phase.
Signal structure
The phase hopping sequence \(c(k)\) is binary and its value is given by
$$c(k) \in \{ - 1,1\}$$
The corresponding phase offset is
$$\varphi (k) \in \{ - \varphi_{{{\text{PH}}}} ,\varphi_{{{\text{PH}}}} \}$$
where \(\varphi_{{{\text{PH}}}}\) is the phase hopping amplitude.
Assume there are two GNSS signal components compounded together, as in Global Positioning System (GPS) L5, Galileo Navigation Satellite System (Galileo) E5a, and BeiDou Navigation Satellite System (BDS) B2a, using Quadrature Phase Shift Keying (QPSK) modulation. The baseband equivalent expression of the input to the phase hopping modulation unit is
$$T_{{{\text{in}}}} (t) = d(t)c_{d} (t) + jc_{p} (t)$$
where \(d(t)\) is the data bits, \(c_{d} (t)\) is the spreading code of the data channel (I channel), \(c_{p} (t)\) is the spreading code of the pilot channel (Q channel). The output signal of the phase hopping modulation unit is
$$\begin{aligned} T_{{{\text{out}}}} (t) & = T_{{{\text{in}}}} (t)e^{j\varphi (t)} \\ & = \left[ {d(t)c_{d} (t)\cos \varphi (t) - c_{p} (t)\sin \varphi (t)} \right] \\ & \quad + \,j\left[ {d(t)c_{d} (t)\sin \varphi (t) + c_{p} (t)\cos \varphi (t)} \right] \\ \end{aligned}$$
$$\begin{gathered} I_{{{\text{out}}}} = d(t)c_{d} (t)\cos \varphi (t) - c_{p} (t)\sin \varphi (t) \hfill \\ Q_{{{\text{out}}}} = d(t)c_{d} (t)\sin \varphi (t) + c_{p} (t)\cos \varphi (t) \hfill \\ \end{gathered}$$
and the RF signal is
$$\begin{aligned} s_{{{\text{PH}}}} (t) & = \sqrt {2P_{1} } \left[ {d(t)c_{d} (t)\cos \varphi (t) - c_{p} (t)\sin \varphi (t)} \right]\cos (\omega_{c} t + \varphi_{0} ) \\ & \quad - \sqrt {2P_{2} } \left[ {d(t)c_{d} (t)\sin \varphi (t) + c_{p} (t)\cos \varphi (t)} \right]\sin (\omega_{c} t + \varphi_{0} ) \\ & { = }\sqrt {2P_{1} } I_{{{\text{out}}}} \cos (\omega_{c} t + \varphi_{0} ) - \sqrt {2P_{2} } Q_{{{\text{out}}}} \sin (\omega_{c} t + \varphi_{0} ) \\ \end{aligned}$$
where \(P_{1}\) is the power of the data channel, \(\omega_{c}\) is carrier frequency, \(\varphi_{0}\) is carrier initial phase, \(P_{2}\) is the power of the pilot channel. Figure 3 shows the constellation of the output signal, where \(P_{1} = P_{2}\), and \(\varphi_{PH} = 5^{ \circ }\).
Binary phase hopping modulation constellation
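As a quick check on the constellation of Fig. 3, the short script below (our own enumeration, not the paper's code) evaluates \(T_{\text{out}} = T_{\text{in}}e^{j\varphi}\) for all combinations of \(d c_{d}, c_{p} \in \{-1, 1\}\) and \(\varphi \in \{-\varphi_{\text{PH}}, +\varphi_{\text{PH}}\}\): each of the four QPSK phases splits into two, giving eight phase points.

```python
import itertools

import numpy as np

phi_PH = np.deg2rad(5.0)           # phase hopping amplitude (5 degrees)

points = []
for d_cd, c_p, c in itertools.product([-1, 1], repeat=3):
    phi = c * phi_PH               # phi(k) in {-phi_PH, +phi_PH}
    T_in = d_cd + 1j * c_p         # QPSK baseband: d(t)c_d(t) + j*c_p(t)
    points.append(T_in * np.exp(1j * phi))

phases = sorted({round(np.angle(p, deg=True), 6) for p in points})
print(len(phases))                 # 8
print(phases)                      # the QPSK phases ±45°, ±135°, each split by ±5°
```
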
SCA at receiver end
In the user segment, it is easy for a receiver to achieve authentication, and there is no need to make massive changes to the existing receiver. The process is as follows.
After the down conversion, the Intermediate Frequency (IF) signal obtained from the receiver is
$$s_{{{\text{IF}}}} (t) = \sqrt {2P_{r1} } I_{{{\text{out}}}} \cos (\omega_{i} t + \varphi_{i} ) - \sqrt {2P_{r2} } Q_{{{\text{out}}}} \sin (\omega_{i} t + \varphi_{i} ){ + }n$$
where \(P_{r1}\) is the data channel power, \(P_{r2}\) is the pilot channel power, \(\omega_{i}\) is the IF carrier frequency, \(\varphi_{i}\) is the IF carrier phase, and \(n\) is noise.
The identity authentication relies on the \(\sin \varphi (t)\), which can be implemented in the following three ways.
Only pilot channel used for authentication
The schematic diagram is shown in Fig. 4. The dashed box in the figure is the identity authentication module, and the rest is the traditional tracking loop. After mixing the IF signal with the locally generated carriers, the high-order components are filtered out. When the tracking loop is stable, the I and Q channel signals are (assuming there are no frequency difference and initial phase difference between the received IF signal and the replicated signal)
$$\begin{aligned} i_{p} (t) & = s_{{{\text{IF}}}} (t)\sqrt 2 \cos (\omega_{o} t + \varphi_{o,p} ) \\ \, & = \sqrt {P_{r2} } Q_{{{\text{out}}}} + n_{i,p} + \cdots \\ q_{p} (t) & = s_{{{\text{IF}}}} (t)\sqrt 2 \sin (\omega_{o} t + \varphi_{o,p} ) \\ & = \sqrt {P_{r1} } I_{{{\text{out}}}} + n_{q,p} + \cdots \\ \end{aligned}$$
where \(\omega_{o}\) is the frequency of the local carrier, \(\varphi_{o,p}\) is the initial phase of the local carrier, and \(n_{i,p}\), \(n_{q,p}\) are the noises of the I and Q channels, respectively.
Schematic diagram of using pilot-channel signal authentication (NCO, Numerically Controlled Oscillator)
Equation (9) shows that authentication on the pilot channel is not affected by the data bits. First, the Q channel signal in Eq. (12) is correlated and integrated with the pilot channel spreading code \(c_{p}\) and the phase hopping sequence \(c(k)\). The higher-order components are removed by the filter in the authentication module. Then, to further improve \(C/N_0\), a coherent accumulation of length \(T_{{{\text{coh}}}}\) is carried out, and the normalized detection value \(V_{i}\) is
$$V_{i} = - \sqrt {P_{r1} } (\varphi_{{{\text{PH}}}} \cdot {\uppi }/180)$$
since the phase hopping amplitude is small, \(\sin \varphi_{{{\text{PH}}}} \approx \varphi_{{{\text{PH}}}} \cdot {\uppi }/180\). The threshold value \(V_{t}\) is
$$V_{t} = \sigma_{n} \sqrt { - 2\ln P_{{{\text{fa}}}} }$$
where \(\sigma_{n}\) is the standard deviation of the noise and \(P_{{{\text{fa}}}}\) is the false alarm probability. If \(V_{i}\) exceeds \(V_{t}\), the authentication succeeds; otherwise it fails.
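The detection step above can be sketched with a small Monte Carlo experiment. Everything below is our own simplified model, not the paper's simulation setup: unit signal power, an envelope detector on the accumulated complex correlation, and illustrative parameter values. The authentication term grows linearly with the number of accumulated chips \(M\), the accumulated noise standard deviation grows as \(\sqrt{M}\), and the threshold follows \(V_{t} = \sigma_{n}\sqrt{-2\ln P_{\text{fa}}}\).

```python
import numpy as np

rng = np.random.default_rng(1)

phi_PH = np.deg2rad(5.0)   # phase hopping amplitude
M = 200_000                # chips accumulated coherently (illustrative)
sigma = 1.0                # per-chip noise std, per real dimension
P_fa = 1e-4
trials = 2_000

# After despreading and accumulating M chips, the authentication term has
# amplitude M*sin(phi_PH) (unit signal power); noise std is sigma*sqrt(M).
amp = M * np.sin(phi_PH)
sigma_n = sigma * np.sqrt(M)

# Threshold V_t = sigma_n * sqrt(-2 ln P_fa) for an envelope detector
V_t = sigma_n * np.sqrt(-2.0 * np.log(P_fa))

noise = sigma_n * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))
V = np.abs(amp + noise)            # detection statistic, signal present
print(np.mean(V > V_t))            # detection probability, close to 1 here
```

With these numbers the signal term sits far above the threshold, so detection is essentially certain; shrinking \(M\) (i.e., the coherent accumulation time) moves the operating point down the ROC curve.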
Only data channel used for authentication
$$\begin{aligned} i_{d} (t) & = s_{{{\text{IF}}}} (t)\sqrt 2 \cos (\omega_{o} t + \varphi_{o,d} ) \\ & = \sqrt {P_{r1} } I_{{{\text{out}}}} + n_{i,d} + \cdots \\ q_{d} (t) & = s_{{{\text{IF}}}} (t)\sqrt 2 \sin (\omega_{o} t + \varphi_{o,d} ) \\ & = - \sqrt {P_{r2} } Q_{{{\text{out}}}} + n_{q,d} + \cdots \\ \end{aligned}$$
where \(\omega_{o}\) is the frequency of the local carrier, \(\varphi_{o,d}\) is the initial phase of the local carrier, and \(n_{i,d}\), \(n_{q,d}\) are the noises of the I and Q channels, respectively.
Schematic diagram of using data-channel signal authentication
Equation (9) shows that to use the data channel for authentication, the influence of the data bits must be eliminated. First, the Q channel signal in Eq. (15) is correlated and integrated with the data channel spreading code \(c_{d}\) and the phase hopping sequence \(c(k)\). The higher-order components are removed by the filter in the authentication module. Then the influence of data bit inversion is eliminated according to the data bit estimates from the I channel. Next, to further improve \(C/N_0\), a coherent accumulation of length \(T_{{{\text{coh}}}}\) is carried out, and the normalized detection value \(V_{q}\) is
$$V_{q} = - \sqrt {P_{r2} } (\varphi_{{{\text{PH}}}} \cdot {\uppi }/180)$$
where \(\sigma_{n}\) is the standard deviation of the noise and \(P_{{{\text{fa}}}}\) is the false alarm probability. If \(V_{q}\) exceeds \(V_{t}\), the authentication succeeds; otherwise it fails.
Both data and pilot channels used for authentication
When the receiver tracks the pilot signal and data signal independently, the above two methods are directly combined to get the normalized detection value \(V\) as
$$V = V_{i} + V_{q}$$
If instead the receiver tracks the pilot and data signals jointly, the phase relation between \(V_{i}\) and \(V_{q}\) must be determined according to the actual tracking loop so that they are combined correctly.
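The benefit of combining both channels can be seen from a simple deflection (post-correlation SNR) argument. The model below is our own back-of-the-envelope sketch, not the paper's simulation: each channel's statistic has a mean proportional to \(T_{\text{coh}}\) and a noise variance proportional to \(T_{\text{coh}}\); summing two equal-mean statistics with independent noise doubles the deflection, which is why the coherent accumulation time needed for a given detection probability halves.

```python
import numpy as np

P, phi_PH, sigma = 1.0, np.deg2rad(5.0), 1.0

def stats(T_coh):
    """Mean and noise variance of one channel's statistic (simple model)."""
    mean = np.sqrt(P) * np.sin(phi_PH) * T_coh
    var = sigma ** 2 * T_coh
    return mean, var

m, v = stats(1.0)
d_single = m ** 2 / v                    # deflection of V_i (or V_q) alone

# V = V_i + V_q: equal means add, independent noise variances add.
d_combined = (2 * m) ** 2 / (2 * v)
print(round(d_combined / d_single, 9))   # 2.0, i.e. a 3 dB gain

# Halving T_coh for the combined statistic recovers the single-channel value:
m2, v2 = stats(0.5)
print(np.isclose((2 * m2) ** 2 / (2 * v2), d_single))   # True
```
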
To use the above three methods, we only need to add an identity authentication module in the classic tracking loop. Table 1 shows the increase in hardware complexity, which is mainly reflected in the number of code sequence generators and correlators.
Table 1 Implementation complexity
In order to verify the performance of the binary phase hopping based SCA technique, this paper simulates the performance loss and detection probability, then compares it with the inserting chip based SCA technique. It is assumed that the energy proportion of the authentication part is the same, i.e., \((\sin \varphi_{{{\text{PH}}}} )^{2}\).
Performance simulation of three authentication methods at receiver end
The simulation parameters are as follows: code rate \(R_{c} = 1.023{\text{ Mcps}}\), phase hopping amplitude \(\varphi_{{{\text{PH}}}} = 5^\circ\), \(C/N_0\) = 40 dB·Hz, coherent integration time 1000 ms, and false alarm probability \(10^{ - 4}\).
The simulation result is shown in Fig. 6. In the figure, "PN_I" represents the method of using the data channel for authentication, "PN_Q" represents the method of using the pilot channel for authentication. The detection probability curves of the two methods coincide. "Combination" represents the method of using both pilot channel and data channel for authentication. When both channels are used, the signal power is fully utilized, so its performance is optimal. The coherent accumulation time required to achieve the same detection probability is reduced by a half.
Performance simulation of authentication module
Performance loss of receivers not participating in authentication
For existing civil receivers that do not include an identity authentication module, the authentication component in the signal is regarded as noise, which degrades \(C/N_0\).
For the inserting chip based SCA technique, assuming the spreading code length is \(N\), the length of the authentication codes is \(K\), the signal amplitude is \(A\), and the noise power is \(\sigma^{2}\), the \(C/N_0\) of the non-authentication signal is \(\frac{{A^{2} }}{{2\sigma^{2} }} \cdot N\). For the receivers which do not participate in authentication, the \(C/N_0\) is \(\frac{{A^{2} }}{{2\sigma^{2} }} \cdot \frac{{(N - K)^{2} }}{N}\). So, the \(C/N_0\) degradation is
$$\Delta C/N_{0} = 10\log_{10} (1 - p_{u} )^{2}$$
where \(p_{u}\) is the ratio of the authentication part in a signal, that is, the ratio of the unpredictable sequence inserted in the spreading code sequence.
For the binary phase hopping based SCA technique, the \(C/N_0\) degradation is
$$\Delta C/N_{0} = 10\log_{10} (1 - p_{u} )$$
where \(p_{u}\) is the ratio of the authentication part in the signal, related to the phase hopping amplitude by
$$p_{u} = \left( {\sin \varphi_{{{\text{PH}}}} } \right)^{2}$$
Thus, theoretically the \(C/N_0\) degradation (in dB) of the binary phase hopping based SCA technique is half of that of the inserting chip based SCA technique.
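The two degradation expressions are easy to evaluate numerically for the phase hopping amplitude used here (\(\varphi_{\text{PH}} = 5^\circ\)); the script below is a quick check of ours, not part of the paper's simulation:

```python
import numpy as np

phi_PH = np.deg2rad(5.0)
p_u = np.sin(phi_PH) ** 2                    # authentication power fraction

loss_insert = 10 * np.log10((1 - p_u) ** 2)  # inserting-chip SCA
loss_hop = 10 * np.log10(1 - p_u)            # binary phase hopping SCA

print(round(p_u, 4))          # 0.0076
print(round(loss_insert, 4))  # -0.0662 dB
print(round(loss_hop, 4))     # -0.0331 dB: half of the inserting-chip loss
```

At this small hopping amplitude both losses are well under 0.1 dB, with the phase hopping scheme exactly half the inserting-chip scheme in dB.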
Figure 7 shows the simulation results of \(C/N_0\) degradation for the two SCA techniques. The theoretical results coincide with the simulation results. The binary phase hopping based SCA technique has lower \(C/N_0\) degradation and better compatibility with the existing receiver architecture.
\(C/N_0\) degradation
Simulation of detection probability
This section simulates the Receiver Operating Characteristic (ROC) performance of the two SCA techniques; the binary phase hopping based SCA technique uses the third authentication method. In the following figures the abscissa represents the false alarm probability \(P_{{{\text{fa}}}}\) and the ordinate the detection probability \(P_{{\text{d}}}\). The simulated ROC performances are plotted in Figs. 8 and 9 with coherent integration time \(T_{{{\text{coh}}}} = 600{\text{ ms}}\), code rate \(R_{c} = 1.023{\text{ Mcps}}\), phase hopping amplitude \(\varphi_{{{\text{PH}}}} = 5^\circ\), and \(C/N_0\) of 35 dB·Hz and 40 dB·Hz, respectively.
ROC performance (\(C/N_0\) = 35 dB·Hz)
When \(C/N_0\) is 40 dB·Hz, the coherent integration time of 600 ms is long enough. There is no significant difference between the authentication success rates of the two SCA techniques: under the same false alarm probability, both are almost 100%. When \(C/N_0\) is reduced to 35 dB·Hz, the coherent integration time of 600 ms is no longer sufficient, and under the same false alarm probability the authentication success rate of the phase hopping based SCA technique is obviously higher. One reason is that for the same coherent integration time, the authentication code length of the phase hopping based SCA technique is \(R_{c} \cdot T_{{{\text{coh}}}}\), while that of the inserting chip based SCA technique is only \(R_{c} \cdot p_{u} \cdot T_{{{\text{coh}}}}\). For a GNSS signal, the longer the spreading code sequence, the higher the spreading gain, meaning stronger resistance to multiple access interference.
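The code-length argument can be made concrete with the simulation parameters above (\(R_{c} = 1.023\) Mcps, \(T_{\text{coh}} = 600\) ms, \(p_{u} = \sin^{2} 5^\circ\)); the numbers below are our own quick calculation:

```python
import numpy as np

R_c = 1.023e6                       # code rate, chips per second
T_coh = 0.6                         # coherent integration time, s
p_u = np.sin(np.deg2rad(5.0)) ** 2  # authentication power fraction

L_hop = R_c * T_coh                 # authentication chips, phase hopping
L_ins = R_c * p_u * T_coh           # authentication chips, inserted chips

gain_dB = 10 * np.log10(L_hop / L_ins)  # spreading-gain advantage = 10*log10(1/p_u)
print(round(L_hop), round(L_ins))       # chips available for correlation
print(round(gain_dB, 1))                # about 21.2 dB in favor of phase hopping
```

Roughly 614k authentication chips versus fewer than 5k explains the gap between the two ROC curves at low \(C/N_0\).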
Considering the errors in the PLL, such as phase jitter and dynamic stress error, Fig. 10 shows the demodulation constellation diagram when the root-mean-square error (RMSE) of the phase jitter \(\sigma_{i}\) is 5°. The actual signal phases jitter around the eight ideal phase points. Figure 11 shows the demodulation constellation diagram when the steady-state value of the dynamic stress error \(\theta_{e}\) is 5°. There is a fixed deviation between the phase points of the actual signal and the eight ideal phase points.
Binary phase hopping modulation constellation (\(\sigma_{i}\) = 5°)
Binary phase-hopping modulation constellation (\(\theta_{e}\) = 5°)
Figure 12 shows the ROC performance of the authentication module for the cases where the root-mean-square error of the phase jitter is 5° and 10°, the steady-state value of the dynamic stress error is 5° and 10°, and \(C/N_0\) is 35 dB·Hz. Compared with the ideal case (i.e., without error), the dynamic stress error hardly affects the ROC performance of the authentication module, while the phase jitter does, though it does not deteriorate the ROC performance much.
Flexibility analysis
The premise of successful authentication is the correct detection of the authentication code, which requires a low false alarm probability and a high detection probability. The success rate of authentication is related to the power and duration of the authentication signal. When the total power of the signal is constant, the higher the power proportion of the authentication signal, the shorter the time needed for successful authentication; a lower proportion requires a longer authentication time. Therefore, a tradeoff is needed between the authentication component's power proportion and real-time authentication, which can be adjusted if necessary.
For the inserting chip based SCA technique, changing the percentage of unpredictable sequence inserted in the spreading code requires adjusting both the strategy for generating the spreading sequence on the satellite and the receiver's processing of the spreading code sequence. The insertion positions and times need to be updated, and transmitting and synchronizing this updated information requires additional resources, so this scheme is less flexible.
For the binary phase hopping based SCA technique, to change the energy proportion of the authentication part in the signal, only the phase hopping amplitude needs to be changed. The receiver does not need to change the receiving mode and processing strategy, which has high implementation flexibility.
Applicability analysis
The modulation mode adopted in the simulation is Direct Sequence Spread Spectrum (DSSS)/QPSK with a code rate of 1.023 Mcps. In the design of modern GNSS signal structures, subcarrier modulation and higher code rates are also used for some signals. Compared with the proposed modulation method, subcarrier modulation only adds a subcarrier modulation module before the carrier modulation. The corresponding demodulation in the receiver does not change the constellation diagram of the signal, so it does not affect the receiver authentication module. At the same time, a higher code rate brings a higher spreading gain, which further improves the performance of the receiver authentication module. Therefore, the proposed scheme is suitable for modern GNSS signals.
In this paper, a new SCA technique based on binary phase hopping is proposed. Its performance is compared with the inserting chip based SCA technique through simulation. In terms of compatibility, the proposed technique is more compatible with the existing receiver architecture and reduces the impact on receivers that do not participate in identity authentication. In terms of authentication success rate, under the same conditions the binary phase hopping based SCA technique has stronger resistance to multiple access interference and a higher authentication success rate. In terms of flexibility, the binary phase hopping based SCA technique is more flexible and easier to adjust. It provides an efficient implementation scheme for future GNSS security design.
Data sharing is applicable to this article.
Dovis, F. (2015). GNSS interference, threats, and countermeasures. Boston: Artech House.
GPS Interface Control Documents IS-GPS-200G. (2012). Retrieved from http://www.gps.gov/technical/icwg/
Guenther, C. (2014). A survey of spoofing and counter-measures. Navigation, 61(3), 159–177.
Hu, Y., Bian, S., Ji, B., & Li, H. (2016). Discussions of satellite navigation countermeasures on spoofing and anti-spoofing techniques. In 2016 China Satellite Navigation Conference (CSNC).
Humphreys, T. E. (2013). Detection strategy for cryptographic GNSS anti-spoofing. IEEE Transactions on Aerospace Electronic Systems, 49(2), 1073–1090.
Kuhn, M. G. (2005). An asymmetric security mechanism for navigation signals (pp. 239–252). Heidelberg: Springer.
Liang, H., Daniel, B. W., & Gao, X. (2013). Cooperative GNSS authentication reliability from unreliable peers. Inside GNSS, pp. 70–75.
Margaria, D., Motella, B., Anghileri, M., Floch, J. J., Fernandez-Hernandez, I., & Paonni, M. (2017). Signal structure-based authentication for civil GNSSs: Recent solutions and perspectives. IEEE Signal Processing Magazine, 34(5), 27–37.
Pozzobon, O. (2011). Keeping the spoofs out: Signal authentication services for future GNSS. Inside GNSS, 6(3), 48–55.
Pozzobon, O., Canzian, L., Danieletto, M., & Chiara, A. D. (2011). Anti-spoofing and open GNSS signal authentication with signal authentication sequences. In 2010 5th ESA Workshop on Satellite Navigation Technologies and European Workshop on GNSS Signals and Signal Processing (NAVITEC). IEEE.
Psiaki, M. L., & Humphreys, T. E. (2016). GNSS spoofing and detection. Proceedings of the IEEE, 104(6), 1258–1270.
Scott, L. (2003). Anti-spoofing & authenticated signal architectures for civil navigation systems. Paper presented at the Ion Gps.
Shen, C., & Guo, C. (2018b). Research on structure-based authentication approaches for civil GNSS signal. In Proceedings of the 9th China Satellite Navigation Conference—S03 Satellite Navigation Signal and Anti-interference Technology.
Shen, C., & Guo, C. (2018a). Study and evaluation of GNSS signal cryptographic authentication defense. GNSS World of China, 43(3), 7–12.
Wesson, K., Rothlisberger, M., & Humphreys, T. (2012). Practical cryptographic civil GPS signal authentication. Navigation, 59(3), 177–193.
This study is supported by Key-Area Research and Development Program of Guangdong Province (Grant No. 2019B010158001).
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China
Shenran Wang, Hao Liu, Zuping Tang & Bin Ye
SW and HL carried out the manuscript writing, simulation and revision; ZT proposed the idea of this paper; BY assisted in carrying out the simulation. All authors read and approved the final manuscript.
Correspondence to Zuping Tang.
Wang, S., Liu, H., Tang, Z. et al. Binary phase hopping based spreading code authentication technique. Satell Navig 2, 4 (2021). https://doi.org/10.1186/s43020-021-00037-z
Keywords: Global navigation satellite system; Spreading code authentication; Binary phase hopping
1988, Volume 303, Number 3
Orthogonal projectors and the solution of the Riccati matrix algebraic equation
F. A. Aliev, B. A. Bordyug, V. B. Larin 521
New results on the localization of solutions of nonlinear elliptic and parabolic equations that are obtained by the energy method
S. N. Antontsev, Kh. I. Dias 524
On the use of the variational principle of invariance in numerical methods of optimal control in the presence of terminal constraints
O. V. Balabanov, V. T. Pashintsev 529
Features of the percolation model of polymer aging
R. P. Braginskii, B. V. Gnedenko, V. V. Malunov, Yu. V. Moiseev, S. A. Molchanov 535
Extremal homogeneous hypergraphs. An estimate for the Zarankiewicz function, and a criterion for the nondegeneracy of the function $\mathrm{ex}$
V. A. Gurvich 538
Hecke symmetries and quantum determinants
D. I. Gurevich 542
Singularly perturbed systems of ordinary differential equations in cases when "degenerate" systems have discontinuous solutions
M. Imanaliev, P. S. Pankov 546
Packings and coverings of the Hamming space by unit balls
G. A. Kabatiansky, V. I. Panchenko 550
Invariant decision-making functions
D. V. Kochetkov 552
A nonlinear generalized Cauchy–Riemann system with two independent complex variables
L. G. Mikhailov 556
Denseness of the functions of compact support in weighted spaces, and weighted inequalities
R. Oinarov 559
Special infinitesimal bendings of conical surfaces
A. V. Pogorelov 563
CYBERNETICS AND REGULATION THEORY
Locally stationary models of nonequilibrium dynamics of macrosystems with self-reproduction
Yu. S. Popkov 567
Vibrations of a horizontal rotor in elastic supports with clearances
A. S. Kel'zon, A. A. Koval' 570
Higher approximations of an asymptotic method for solving a system of Boltzmann kinetic equations
V. V. Struminskii, V. E. Turkov 574
On long-time electrical anomalies arising during rock destruction
M. Ya. Batbachan 579
An inverse-problem method for restoring the characteristics of light scattering by dispersion media
V. D. Bushuev, I. È. Naats 583
On the rate of spreading in the Tyrrhenian Sea
E. V. Verzhbitskii, I. M. Sborshchikov 586
Periodic variations of ozone and incident solar radiation in the atmospheric near-surface layer
L. S. Ivlev, O. V. Maksimenko, V. G. Sirota 589
On the dispersive composition of atmospheric aerosols and on the calculation of their precipitation
K. Ya. Kondrat'ev, V. M. Khvat, V. M. Moskovkin, M. B. Manuylov 591
On the coherent summation of scattered and diffraction fields in the problems of light scattering by large crystals
A. A. Popov 594
Interaction between laser radiation and diamond films
V. P. Ageev, L. L. Builov, V. I. Konov, A. V. Kuzmichev, S. M. Pimenov, A. M. Prokhorov, V. G. Ral'chenko, B. V. Spitsyn, N. I. Chapliev 598
Modified cylindrical functions for the numerical solution of electrodynamics problems in the cylindrical coordinate system
A. A. Vorontsov, S. D. Mirovitskaya 602
Dynamics of the pion degrees of freedom in the collisions of nuclei
D. N. Voskresenskii, A. V. Senatorov 606
On the nature of a chlorophyll luminescence intensity low-temperature band in solutions
V. F. Gachkovskii 611
Experimental evidence of the influence of a nonuniform magnetic field on the value of the coercivity field of an isolated domain wall
A. N. Grigorenko, S. A. Mishin, A. M. Prokhorov, E. G. Rudashevskiĭ 615
On self-switching of nonidentical unidirectional distributedly-coupled waves fed into a system
A. A. Mayer 618 | CommonCrawl |
Empirical Research in Vocational Education and Training
A comparative analysis of the OECD/INFE financial knowledge assessment using the Rasch model
Bernadene de Clercq ORCID: orcid.org/0000-0003-4314-22551
Empirical Research in Vocational Education and Training volume 11, Article number: 8 (2019)
Based on Item Response Theory (IRT), and more specifically the Rasch model, the financial knowledge domain included in the OECD/INFE adult financial literacy assessment conducted in 2015 was evaluated. This was done in order to determine whether the measurement instrument, in its existing design, could be classified as an international large-scale assessment (ILSA), suitable for use within countries and for comparison across countries. The development cycle of the OECD/INFE assessment is briefly presented to portray the conditions necessary to ensure that successful measurement leads to action. Based on the first phase of the analysis, the suitability of the data for the Rasch model was established and the applicability of the instrument to country-specific analysis was confirmed. However, the differential item functioning (DIF) exploration determined that the assumption that item difficulties are homogeneous across the various countries does not hold, thereby confirming the utility of this study. The results highlight the risk attached to traditional rankings of results, as opposed to more sophisticated analyses, since traditional approaches could misdiagnose problem areas on the basis of instruments that might not be comparable across countries. Based on the results, the OECD/INFE adult financial knowledge assessment does not appear to adhere to the requirements for classification as an ILSA.
The benefits of being financially literate are extensively reported on in academic and policy circles, with areas covered over the past few years including retirement planning (Alessie et al. 2011; Lusardi and Mitchell 2007), wealth creation (Zinni 2013) and inequality (Lusardi et al. 2017). The rise in inequality in particular is highlighted by the World Economic Forum in The Global Risks Report 2017, in which it is stated that "Growing income and wealth disparity is seen by respondents as the trend most likely to determine global developments over the next 10 years" (WEF 2017). In light of the evidence provided by Lusardi et al. (2017) that inadequate financial knowledge is a key determinant of wealth inequality, every effort should be made to ensure that consumers around the world achieve the optimal level of financial knowledge as a possible mechanism to reduce inequalities.
Since 2002, progress has been made in the measurement of financial literacy across a number of countries, culminating in the 2017 review focusing specifically on the G20 countries (OECD 2017). From a policy perspective, the 2017 review of financial literacy resulted in institutions such as the OECD foregrounding financial literacy's geo-political, cross-national and contextual importance within broader social concerns, such as global inequality. In 2010, the establishment of the OECD/International Network on Financial Education (OECD/INFE) was formalised (Atkinson 2011; Kempson 2009). According to the OECD (2016), concerted efforts to address the areas of financial education, financial consumer protection and, of increasing importance, financial inclusion are the three main initiatives needed to empower individuals and ensure the overall stability of the world's financial systems. Organisations leading such interventions, in this case through the adopted OECD/INFE instruments, need to ensure that the initiatives are trustworthy and credible, and do not exacerbate the very local and global pressure points that they seek to address. In an effort, therefore, to measure the level of financial literacy, both the OECD and the World Bank have in recent years embarked on diverse projects to measure financial literacy in a comprehensive manner. The aim of these international measurements is to assist policymakers in identifying important pressure points and vulnerable groups that require attention and focused interventions.
If financial literacy assessment is conducted by means of a suitable internationally comparable instrument, countries are able to benchmark themselves, identify common patterns and work together to find solutions to similar problems. However, given the heterogeneity and localised, diverse contexts of the respondents both across and within countries, it is vital that both local and international measurement instruments actually measure what they profess to measure, and that the results are reliable, valid and fully comparable. These methodological criteria are normatively and ethically expected of any credible study, especially of international large-scale assessment (ILSA) studies that are widely administered to sampled respondents across the globe and the equality-inequality continuum. As stated by Kirsch et al. (2013), ILSA studies have "expanded in scope over time in response to increasing concern about the distribution of human capital and the growing recognition that skills contribute to the prosperity of nations and to better lives for individuals in those nations". Given the ethical stakes, and the dependence of a variety of stakeholders on the information collected in these ILSAs, Lietz et al. (2017) suggest that such large-scale assessments should be robust and useful, of high quality, technically sound, have a comprehensive communication strategy and be useful for education policy.
The OECD/INFE has conducted two international large-scale assessments of adult financial literacy, in 2011 and 2015 respectively, with a third scheduled for 2019. The specifics of the instrument design are discussed only briefly, in "Guiding principles for implementing an international large-scale assessment" section, since the development of the instrument itself forms the context for the study rather than the unit of analysis. The extant assessment, which set out to measure financial knowledge (as a sub-component of financial literacy, see Fig. 1) and included eight determinant questions, is the focal unit of the study. Thirty countries participated in the assessment, totalling 51,650 respondents (OECD 2016). Unfortunately, not all of the data from all of the participating countries were available at the time of writing, but the analysis, based on the data available from 11 of the 30 countries, nevertheless provides some guiding results, which may be refined and extrapolated in future work when the outstanding data from the remaining 19 countries become available.
The OECD/INFE adult financial literacy assessment framework. This figure illustrates the positioning of the financial knowledge domain amongst the other domains included in the OECD/INFE adult assessment
The OECD applied the Rasch model technique to the Programme for International Student Assessment (PISA). The Rasch model makes provision for the analysis of respondents' responses to a set of items and compares the respondents' abilities with the difficulty of the question bank, thereby allowing the psychometric appropriateness of the financial knowledge assessment instrument to be evaluated. Capitalising on this existing work, the author sets out to extend the body of knowledge on the use of the Rasch model as applied to ILSAs, thereby making both an applied and a theoretical contribution. The article reports on the first attempt to apply the Rasch model to the OECD/INFE adult financial literacy assessment for the purposes of assessing its validity as a comparative instrument for international application. The unit of analysis is the psychometric quality and cross-country comparability of the OECD/INFE financial knowledge assessment questions, as opposed to instrument design. Given the heterogeneity of the respondents and local contexts both across and within the participating countries, this article outlines the validation of the measurement instrument for the financial knowledge domain in the OECD/INFE adult financial literacy assessment. This article therefore extends the literature on the evaluation of international large-scale assessments through the application of psychometric tests (namely the Rasch model) as an evaluation measure. This stands in contrast to the traditional league tables and average scores associated with Classical Test Theory (CTT), which has been found to have limitations (Kunovskaya et al. 2014). Given that another wave of the assessment of financial literacy across the OECD/INFE member countries is scheduled for 2019, the results gained through this novel lens could provide some suggestions for enhancement of the international large-scale assessment instrument.
This article's overarching aim, therefore, is an 'assessment' of an assessment, with the Rasch model as a novel means of evaluation and the OECD/INFE financial knowledge assessment as its object. The appropriateness of the Rasch model is evaluated through examination of the item fit statistics. By means of differential item functioning (DIF) analysis, further validity evidence for the comparability of cross-country results is explored, to determine to what extent the underlying OECD/INFE assessment framework of financial knowledge can be confirmed with the limited set of seven questions. By means of IRT, and more specifically the Rasch model, the following research questions were explored:
To what extent is there evidence of the internal validity (e.g. reliability, item fit) of the OECD/INFE adult financial knowledge assessment?
Does the OECD/INFE adult financial knowledge assessment provide an invariant measure across adults in participating countries?
The remainder of the article is structured as follows: the first section will provide a synopsis on some guiding principles for the development of international large-scale assessments. The remaining sections will then discuss the methodology followed in preparation for the analysis of the international comparability, as is described in the analysis and results section. Lastly, a discussion of the results, and some limitations, of the study as well as recommendations for future assessments are provided.
Guiding principles for implementing an international large-scale assessment
Lietz et al. (2017) provide some insights into the key areas (or steps) that need to be taken into account in the implementation of large-scale assessments to ensure the reliability and validity of the results obtained. Lietz et al.'s (2017) 13 key areas are also in line with the four steps suggested by Kirsch et al. (2013), and these four steps form the basis of the brief discussion of the development of the OECD/INFE financial literacy adult assessment in the remainder of this section.
Step 1: Policy questions
According to Kirsch et al. (2013), the first step in the development cycle of a large-scale assessment is usually motivated by policy questions to determine the objectives of the assessment: the "who" and "what" that is to be tested. Based on the generic nature of the policy objectives across the OECD/INFE member countries, a Financial Literacy Measurement Sub-group (hereafter referred to as the Measurement Sub-group) was established by the OECD/INFE, tasked with developing and implementing an internationally comparable survey to obtain data on financial literacy and capability (Kempson 2009).
Therefore, during the conceptualisation of the assessment, the OECD/INFE Measurement Sub-group debated the target audience of the survey, given policy considerations. After reviewing several scenarios, the Measurement Sub-group recommended that all adults aged 18 and over, with no upper age limit, should be included in the sample frame. However, as is common practice in national surveys, people living in residential institutions, such as care homes, hospitals or prisons, were excluded, as were people living in extremely sparsely populated areas (Atkinson and Messy 2011; Kempson 2009; OECD 2015).
In terms of policy, terminology is always contentious. The Measurement Sub-group opted for the term 'financial literacy', taken to mean 'a combination of awareness, knowledge, skill, attitude and behaviour necessary to make sound financial decisions and ultimately achieve individual financial wellbeing' (Atkinson and Messy 2012; OECD 2011, 2013, 2015).
The Measurement Sub-group thus determined the 'who' and 'what' of the assessment.
Step 2: Assessment frameworks and instrument design
To ensure the international comparability of an assessment, it is essential to have agreement on the concept to be measured. Agreement should also be achieved on the operationalisation of the concept through the development and application of a measurement instrument that provides fully cross-country comparable results (Kirsch et al. 2013; Lietz et al. 2017). Informed by the definition of financial literacy, the suggested assessment framework is illustrated in Figs. 3 and 4. This assessment framework was the result of various rounds of input from OECD/INFE members, international academics and experts from national statistical offices, guided by established principles, which determined that the three overarching domains of financial knowledge, behaviour and attitudes should be the focus of the measurement instrument. In addition to the three financial literacy domains, the decision was made to include financial inclusion as well as socio-demographic information to address some broader policy objectives.
The Measurement Sub-group (OECD 2016) selected questions to operationalise the measure of 'financial knowledge' on the basis of their ability to assess different aspects of the basic knowledge that is widely considered useful to individuals when making financial decisions. Some of these questions originated from the work of Lusardi and Mitchell (2009) and Van Rooij et al. (2007, 2011). Aiming to measure financial literacy and assess its relationship with financial decision-making, Van Rooij et al. (2007) differentiated between basic and sophisticated financial knowledge as indicators of financial literacy. According to them, households display basic financial knowledge when they have some understanding of concepts such as interest compounding, inflation and the time value of money. In their measurement of financial literacy, sophisticated financial knowledge relates to households' understanding of the difference between bonds and stocks, the relationship between bond prices and interest rates, and the basics of diversification. As the measurement objective of the selected questions is to gain insight into households' understanding of basic financial concepts, this corresponds, in Bloom's revised taxonomy (Krathwohl 2002), to the factual knowledge required of households, indicative of their acquaintance with the basic elements of financial decision-making. Although it might seem that these questions are not sufficient to measure the full financial knowledge domain, this issue will be returned to later, during the assessment of the measurement instrument's suitability for use as an ILSA. Suffice it to state that both the questionnaire developers and the Measurement Sub-group went to great lengths to ensure that sound measurement instrument development practices were applied.
Step 3: Methodological advances
The OECD/INFE adult financial literacy assessment applied CTT as its predominant measurement paradigm. It is imperative to ensure that the results provided are fair to participants across all countries, as evaluated by the psychometric qualities established through Rasch analysis. Based on the distribution of the financial knowledge scores obtained in the OECD/INFE assessments, the conclusion was reached that the set of financial knowledge questions differentiated sufficiently between high and low achievers through a combination of easy and more difficult problems, providing a good level of discrimination (Atkinson and Messy 2011; OECD 2016). However, the authors do indicate that in the second assessment, for example, Hong Kong, Korea, the Netherlands and Norway had relatively large proportions of respondents answering all the questions correctly, and suggest that more difficult questions could be considered in future to differentiate better in these countries (OECD 2016).
Step 4: Enhanced analysis and interpretation of data
The literature on assessing measurement instruments themselves, specifically those pertaining to financial literacy, is moving away from CTT towards more advanced IRT techniques, such as the Rasch model (Knoll and Houts 2012; Kunovskaya et al. 2014). Rather than limiting the assessment to classical test theory models that focus primarily on measuring individual differences (Kirsch et al. 2013), an alternative assessment method, the Rasch model, is proposed, which focuses on the performance of national populations rather than of individual respondents. The Rasch model is an ability measurement technique that has been widely used in education, and is recommended as one of the best approaches to performing worldwide evaluation processes (Serrão and Pinto-Ferreira 2015).
It is also important to note that determining whether the subset of questions fully measures the 'financial knowledge' construct is not the purpose of the OECD/INFE assessment instrument, as 'it should not be assumed that the seven principles covered by financial education are sufficient to equip individuals with all the knowledge that they need' (OECD 2016). Furthermore, the purpose of this article is not to develop a new instrument but rather to assess the current one; the purpose of the discussion was thus only to provide a general understanding of the questions used to assess financial knowledge for the purposes of the Rasch analysis, which is the focus of this article. The point should, however, be emphasised, as highlighted in step 2, that the measurement instrument development process followed by the Measurement Sub-group endeavoured to abide by best practice and drew on the best expertise around the world to provide the necessary information to address the pertinent policy objectives.
The focus of this article is to evaluate the psychometric quality and cross-country comparability of the OECD/INFE financial knowledge assessment questions, as presented in Annexure B, using IRT. Based on nonlinear models between the measured latent variable and the item response, IRT enables independent estimation of item and person parameters and local estimation of measurement error. These properties are also the main theoretical advantages of IRT over CTT. Compared with CTT, a Rasch model (and other IRT models) provides the distinct benefit of a Wright map (also referred to as a person-item map). The visual appeal of this map enriches understanding and interpretation by suggesting to what extent the items cover the targeted range of the underlying scale and whether the items align with the target population (Progar and Sočan 2008; Cappelleri et al. 2014).
Sample and procedure utilised for the ILSA and subsequent secondary analysis
The secondary data utilised for the evaluation of whether the OECD/INFE measurement can be classified as a successful ILSA were collected by participating countries in 2015 by means of personal in-home surveys. Respondents had to be 18 years of age or older, but not older than 79 years of age. The characteristics of the 15,936 respondents across 11 (out of 30) countries are provided in Table 1. Data from Austria, Brazil, Canada, Croatia, Finland, Hungary, Hong Kong, Jordan, Russia, South Africa and the United Kingdom were used in this article. These countries present quite a diverse distribution across various classifications, for example: (i) development phase, as reflected by the Global Competitiveness Index (GCI), the Human Development Index (HDI), the United Nations World Economic Situation and Prospects (WESP) classification and the International Monetary Fund's World Economic Outlook (WEO) groups; (ii) global membership (OECD and G20 membership); and (iii) income groups as categorised by the World Bank. The countries included in this study did not constitute a homogeneous group but instead represented a range, with highly developed countries such as Austria, Canada, Finland, Hong Kong and the United Kingdom as well as transitional countries such as Brazil, Croatia, Hungary, Jordan, the Russian Federation and South Africa participating. These interviews generated datasets for 11 countries.
Table 1 Characteristics of the samples by countries.
Using the datasets of the 11 countries therefore provided the opportunity for the author of this article to use secondary data analysis to determine the psychometric qualities of the measurement instrument based on the seven financial knowledge questions. This made it possible to assess whether the datasets were indeed internationally comparable and applicable, as is currently implied in the traditional league tables based on average scores.
The mean age distribution was relatively equal, ranging from 41 to 48 years. Except for the respondents from Jordan, approximately 20% of the respondents were between 18 and 29 years of age; however, almost 50% of the respondents from Jordan were in the 18 to 29 years of age category. Regarding the top end of the age categories Jordan was once again the exception, with very few respondents above the age of 60 (only about 5%, compared with 20% to 30% in the other countries). Jordan was also the exception when it came to gender distribution, being the only country for which there were more males than females in the realised sample. The highest education attainment of the majority of respondents in Brazil, Croatia, Hungary and South Africa was secondary or less, whereas almost 50% of the respondents from Canada had a post-school qualification.
To analyse the data based on the Rasch model, the WINSTEPS® measurement computer program (Linacre 2017a), version 4.0, was utilised. The evaluation of the quality of the OECD/INFE financial knowledge assessment was done in three phases:
Assessment of whether the fundamental assumption of unidimensionality of the set of financial knowledge questions holds true: the assumptions and the adequacy of the Rasch model for responses to the financial knowledge questions in the OECD/INFE measurement were tested. In Fig. 2, the structure of the unidimensional model is displayed, consisting of the expected number of items that should load onto the composite factor of financial knowledge.
For the purposes of the Rasch model, the seven questions were recoded to reflect a dichotomous nature, as indicated in "Appendix A". One of the fundamental assumptions of the Rasch model is that the response probability of each respondent (or person) to each question (or item) is a function of the difference between the person's ability and the item difficulty (Kunovskaya et al. 2014). The probability of a correct response by a person j to an item i is given by
$$P_{ij}\left(x_{ij} = 1\right) = \frac{\exp(\theta_{j} - \beta_{i})}{1 + \exp(\theta_{j} - \beta_{i})}$$
where $x_{ij}$ is the response of person j to item i, $\theta_{j}$ is the latent ability of person j, and $\beta_{i}$ is the difficulty of item i.
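As a minimal illustration (not the WINSTEPS implementation), the response probability defined above can be computed directly; the ability and difficulty values below are arbitrary examples:

```python
import math

def rasch_probability(theta, beta):
    """Probability that a person of ability theta answers an item of
    difficulty beta correctly, under the dichotomous Rasch model."""
    return math.exp(theta - beta) / (1.0 + math.exp(theta - beta))

# When ability equals difficulty, the success probability is exactly 0.5.
p_equal = rasch_probability(0.0, 0.0)

# A person one logit above the item difficulty succeeds roughly 73% of the time.
p_above = rasch_probability(1.0, 0.0)
```

Note that the probability depends only on the difference theta − beta, which is what places persons and items on the same logit scale and makes the Wright maps discussed later possible.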
Unidimensional Rasch model: financial knowledge measurement instrument. The structure of the unidimensional model is displayed, consisting of the expected seven items that should load to the composite factor of financial knowledge
Indices of the fit of the data to the model: through the application of the Rasch model, item fit and difficulty estimates for each country, including person and item reliability and separation indices, were calculated to evaluate the set of individual items. The four indices measure, respectively: the replicability of person ordering that could be expected if this sample of persons were given another parallel set of items measuring the same construct; the replicability of item placement along the pathway if the same items were given to another sample of the same size that behaved in the same way; the spread of ability across the sample, such that the measures demonstrate a hierarchy of ability/development; and the number of standard errors of spread among the items (the spread, or separation, of items on the measured variable).
Psychometric appropriateness of the measurement instrument: over and above the data fit indices, the psychometric appropriateness of the instrument, namely its ability to distinguish between person ability and item difficulty, was also assessed. In addition to the identification of the specific item that was the most difficult for the samples across all the countries, different item difficulty hierarchies were also identified across the countries based on the item fit orders. The results of the cross-national comparison were further enhanced through a review of the Wright maps of each country: these provided a combined view of item difficulty and person ability. The Wright maps clearly indicate different person ability patterns in relation to the items.
Exploration of evidence of differential item functioning (DIF) between the countries: in order to determine whether the measurement items were biased as a function of a specific attribute, a DIF analysis was conducted. Measurement bias across various attributes, for example gender, race or language, can be evaluated in a DIF analysis (Boone et al. 2014, p. 274). However, given that the aim of the OECD/INFE assessment is comparability across countries, the DIF analysis was limited to determining whether the financial knowledge measurement items behave differently across a heterogeneous group of countries. This additional exploration is necessary to determine whether the assumption that item difficulties are homogeneous across the different countries in fact holds.
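The logic of a country-level DIF screen can be sketched as follows: estimate item difficulties separately per country and flag items whose estimates diverge by more than some cut-off. The country names, item labels, difficulty values and the 0.5-logit threshold below are all hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical per-country item difficulty estimates (in logits).
difficulties = {
    "Country A": {"interest": -1.2, "inflation": -0.4, "diversification": 1.1},
    "Country B": {"interest": -1.1, "inflation": 0.6, "diversification": 1.0},
}

DIF_THRESHOLD = 0.5  # illustrative cut-off, in logits

def flag_dif(group_a, group_b, threshold=DIF_THRESHOLD):
    """Return the items whose difficulty estimates differ between
    the two groups by more than the threshold."""
    return [item for item in group_a
            if abs(group_a[item] - group_b[item]) > threshold]

flagged = flag_dif(difficulties["Country A"], difficulties["Country B"])
# Only "inflation" differs by more than 0.5 logits in this made-up example.
```

An item flagged in such a screen measures something different, or at a different level, in the two groups, which is exactly what undermines cross-country comparability of total scores.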
Results of the Rasch model
Unidimensionality
Starting with phase one, this section first determines whether the data satisfied the fundamental Rasch assumption of unidimensionality. Wright's Unidimensionality Index, based on the ratio of the real (misfit-inflated) standard errors to the model standard errors, was used (Wright 1994). Unidimensionality can be assumed if the value is above .9, while values of .5 and below indicate multidimensionality. The index values are reflected in Table 2. The results indicate an index value above .9 for all countries, signalling unidimensionality.
Table 2 Results of Unidimensionality.
A Rasch residual-based principal component analysis (PCA) was also conducted to confirm unidimensionality. The results from this type of PCA are indicative rather than definitive (Linacre 2017b). Secondary dimensions are identified in the data by decomposing the observed residuals. The analysis identifies any common variance among those aspects of the data that remain unexplained or unmodelled by the primary Rasch measure. Eigenvalues above 2 for the first contrast typically indicate the presence of multiple dimensions and associations within the data (Linacre 2017b). A PCA performed on the residuals demonstrated first-contrast eigenvalues smaller than 2, ranging from 1.56 (Jordan) to 1.99 (Croatia). It was thus confirmed that the unidimensionality assumption for the Rasch model held for all 11 countries.
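The residual-based PCA can be sketched as follows; this is a simplified stand-in for the Winsteps procedure, using simulated standardized residuals rather than real OECD/INFE data:

```python
import numpy as np

# Simulated standardized residuals: 500 persons by 7 items. After a good
# Rasch fit, the residuals should carry no shared (dimensional) structure.
rng = np.random.default_rng(0)
residuals = rng.standard_normal((500, 7))

# Eigen-decomposition of the inter-item correlation matrix of the residuals.
corr = np.corrcoef(residuals, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Rule of thumb: a first-contrast eigenvalue of 2 or more (the strength of
# at least two items) suggests a secondary dimension in the residuals.
unidimensional = eigenvalues[0] < 2.0
```

With purely random residuals, as here, the largest eigenvalue stays well below 2; the reported country values of 1.56 to 1.99 sit closer to that boundary, which is why the PCA evidence is treated as indicative rather than definitive.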
Model fit assessment
Boone et al. (2014) and Linacre (2017b) provide guidance on the adequacy assessment process, starting with an evaluation of how well the data conform to the Rasch model, in other words, the model fit assessment. To determine the model fit, Linacre (2017b) suggests that the mean square fit statistic (infit) has an expectation of 1 and that the standardised fit statistic should approximate a theoretical mean of 0. Applying these guidelines, the results in Table 3 suggest that the data fit the model reasonably well, as the mean square was 1.0 or close to 1 (.98 or .99) in each country (column 3) and the standardised statistic (column 4) was 0 for all countries except Hungary, for which it was nevertheless close to 0 (− .1).
Table 3 Financial knowledge test: summary statistics of Rasch modelling for non-extreme persons by country.
In Rasch terms, Winsteps provides several key reliability indices, as indicated in Table 4. It is important to note that, for the purposes of the reliability evaluation, the author agrees with Boone et al. (2014) on the standard procedure of excluding extreme persons or outliers (i.e. those respondents who had either nothing correct or everything correct). In the case of a person having everything correct, it is not possible to gauge from the assessment how much more knowledgeable the person really is (was the assessment the plateau of their knowledge, or do they actually know a lot more about the topic?), resulting in an infinite error estimation. This infinite error size does not assist in assessing the differentiating ability of the instrument, and therefore the 48 extreme persons (South Africa—33; Croatia—1; Russia—8; Austria—6) are excluded from the reliability assessment.
Table 4 Reliability assessments based on non-extreme persons and 7 non-extreme items by country.
According to Linacre (2017b), the reliability index can be interpreted in a way similar to the better-known Cronbach's alpha and is indicative not of the quality of the data, but of the reproducibility of the instrument. As these values are influenced by large sample sizes, and all the samples comprised 1000 respondents or more, it is necessary also to consider the separation index, which indicates the number of standard errors of spread among the persons (or items).
The estimated person reliability (Table 4) in all 11 countries was very low, between 0 and .36, well below the commonly used threshold of .8. Low person reliability might indicate a narrow range of person abilities, or may be related to the small number of items on the test. The person separation estimates are indicative of the sensitivity of the test instrument in distinguishing between high and low performers (Linacre 2017b). Separation estimates can range from 0 to infinity, with a higher value being preferred (Boone et al. 2014). Person separation estimates for the test in each country (Table 5) were less than 2 for all countries, meaning that the test instrument was not sensitive enough to distinguish between high and low performers. The results could have been influenced by the exclusion of the outliers; measuring the effect of this exclusion is, however, beyond the scope of this article.
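The .8 reliability and 2.0 separation thresholds used in this section are two views of the same quantity. The standard conversion between them can be checked directly:

```python
import math

def separation_from_reliability(r):
    """Separation index G implied by a reliability R: G = sqrt(R / (1 - R))."""
    return math.sqrt(r / (1 - r))

def reliability_from_separation(g):
    """Inverse relationship: R = G^2 / (1 + G^2)."""
    return g * g / (1 + g * g)

# The thresholds are consistent with each other: a reliability of .8
# corresponds to a separation of exactly 2 (and vice versa).
print(round(separation_from_reliability(0.8), 2))   # 2.0
print(round(reliability_from_separation(2.0), 2))   # 0.8

# The highest person reliability reported (.36) implies separation of only .75,
# well below the separation-of-2 benchmark.
print(round(separation_from_reliability(0.36), 2))  # 0.75
```

This makes explicit why the reliabilities of 0 to .36 and the separations below 2 reported for all countries tell the same story about the instrument's person-discriminating power.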
Table 5 Financial knowledge test: summarised item statistics by country
The item reliability indices were good, with all values above .9. Item separation estimates, in turn, are used to verify the item hierarchy, with low estimates signalling that the sample was not big enough to locate the items precisely on the latent variable (Linacre 2017b). The item separation values were, however, high, ranging from 12.38 to 30.93, well above the threshold value of 3, and indicated a large spread of the items along the item difficulty hierarchy.
The item reliability index and the item separation index values for all countries thus indicated replicability and a spread of items across the item hierarchy, as the item reliability indices are above .9 and the item separation indices are above 3. However, the person reliabilities (less than .8) and the person separation indices (below 2) highlight that the test was not sensitive enough to distinguish between high and low performers across all countries.
The root mean square error, RMSE, is a further measure of a lower limit to the reliability of measures based on this set of items for this sample. A value close to 0 indicates a good fit. Low RMSE values for items were observed for all the countries, thereby indicating reliability of item estimates. The RMSE values for persons were very high (between 1.08 and 1.33) (Table 5) and signalled that the data were not an adequate fit. The RMSE results are thus in alignment with the results of the person and item reliability and separation indices.
Psychometric appropriateness of the measurement instrument
Item fit and difficulty estimates
Following the model fit assessment, the next step in the adequacy assessment entailed evaluating the item fit and difficulty estimates to identify unexpected patterns. The item fit statistics and the measure order, an estimate of item difficulty, are summarised in Table 5, where the items are arranged from the most difficult (largest positive logit value) to the least difficult (largest negative logit value). According to Linacre (2017b), the difficulty of an item is defined as "the point on the latent variable (unidimensional continuum) at which its high and low categories have equal probability of being observed." The reported logit values for item difficulty are likewise arranged in Table 6 from the most to the least difficult items.
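Linacre's definition of difficulty quoted above can be made concrete with the dichotomous Rasch model's probability function (a generic sketch, not tied to the study's estimates):

```python
import numpy as np

def rasch_probability(theta, b):
    """Probability of a correct answer for ability theta on an item of difficulty b."""
    return 1 / (1 + np.exp(-(theta - b)))

# An item's difficulty is the point on the logit scale where a correct and an
# incorrect response are equally likely: P = .5 exactly when theta == b.
print(rasch_probability(1.2, 1.2))             # 0.5

# A person 1 logit above the item's difficulty answers correctly ~73% of the time.
print(round(rasch_probability(2.2, 1.2), 2))   # 0.73
```

This is why difficulty and ability can be read off the same logit scale, as the Wright maps later in the section do.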
Table 6 Item Overfit and Underfit Assessment.
The difficulty spread of the items as per Table 5 was between 3.67 (highest difficulty measure, Hungary) and −3.21 (lowest difficulty measure, Finland). Both of these values lie outside the −3 to +3 logit range that indicates a "balanced" test. An item difficulty measure above 3 for question 6 was also recorded for Jordan, Hong Kong and Brazil, indicating behaviour that deviated more than expected. Furthermore, it is clear from the analysis in Table 6 that no two countries had a similar item difficulty pattern. The only common features were that the composite question 6 was the most difficult item across all countries, and question 4 was the easiest across 7 of the 11 countries.
In terms of the outfit MNSQ for all items reported, items outside the acceptable range (< .75 or > 1.3) (Bond and Fox 2014) were observed for each country. Table 6 shows items that overfit (which indicate too little variation and a too determined response pattern) as well as items that underfit (too much variation and a too haphazard response pattern). It is important to note that both question 5 (8 out of the 11 countries) and question 6 (5 out of the 11 countries) were indicated as items that overfit, while item questions 7a (5 out of the 11 countries) and question 3 (time value of money) (4 of the 11 countries) were indicated as items that underfit.
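Flagging overfitting and underfitting items against the .75 to 1.3 range is a simple filter. The outfit values below are illustrative, not those reported in Table 6:

```python
# Hypothetical outfit mean-square values for the seven knowledge items in one
# country (illustrative numbers, not taken from the paper's tables).
outfit_mnsq = {"QK3": 1.42, "QK4": 1.05, "QK5": 0.68,
               "QK6": 0.72, "QK7a": 1.35, "QK7b": 0.98, "QK7c": 1.10}

LOW, HIGH = 0.75, 1.3   # acceptable range cited in the text (Bond and Fox 2014)

overfit = [q for q, v in outfit_mnsq.items() if v < LOW]    # too predictable
underfit = [q for q, v in outfit_mnsq.items() if v > HIGH]  # too haphazard

print("overfit:", overfit)     # ['QK5', 'QK6']
print("underfit:", underfit)   # ['QK3', 'QK7a']
```

The illustrative values were chosen to mirror the pattern the text describes, with QK5 and QK6 overfitting and QK3 and QK7a underfitting.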
Wright maps (also referred to as person–item maps) graphically present persons' achievement and item difficulty on one logit scale. The person fit statistics provided in Table 4 also call for further investigation of the person ability distributions for each country. In Figs. 3 and 4, the person ability distribution is indicated by the # symbols above the line, while the item numbers (QK3 to QK7c) below the line show the distribution of the set of questions. Lower person measures and lower item difficulties appear on the left-hand side of the Wright map; more knowledgeable respondents and more difficult items appear on the right-hand side. The letter 'M' indicates the mean score (the top one for the group of respondents in the specific country, the bottom one the mean logit for the seven items). The letters 'S' and 'T' respectively indicate one and two standard deviations from the mean. Finally, '0' indicates that participants of average ability had a 50% probability of answering an item of average difficulty correctly. Figures 3 and 4 present the Wright maps with all items and respondents for the 11 countries under review.
Wright maps: financial knowledge assessment. Wright maps portray the two dimensions of (i) ability of the respondents and (ii) difficulty of the questions in one illustration. The placement of the respondents' abilities relative to the difficulty of the questions gives an indication of how closely (or not) the participants' abilities match the difficulty of the measurement instrument
The Wright maps (Figs. 3 and 4) show some misalignment of persons and items for the majority of the countries, with the average person position at a higher point on the logit scale than the average item position. For Austria, Finland, Hong Kong, Hungary, Russia and the United Kingdom there is clear mistargeting between the distribution of persons and items, demonstrated by the high number of persons positioned above where the financial knowledge items were measuring. The misalignment was greatest for Hong Kong, indicating that the questions might be too easy for respondents there, with Austria, Russia and Finland showing a similar pattern. For Brazil, Croatia and South Africa the test matched the abilities of the samples well, while for the remaining countries, namely Canada, Hungary, Jordan and the United Kingdom, the maps show that the test was also relatively easy, but to a lesser extent.
The results up to now indicated that although the data for each country fitted the Rasch model, large differences were observed in terms of both the item difficulty order and misalignment of persons and items on the Wright maps (Figs. 3 and 4) across countries. This indicated the need for further exploration of the differences across countries. DIF was subsequently used to determine whether the assumption that item difficulty was homogeneous across the countries under review could be deduced.
Results of the assessment of the homogeneity of item difficulty across countries
Test item bias or DIF analysis determines whether an item measures equally for different subgroups. A biased or DIF item is one for which the probability of success is not the same for equally able test takers from different subgroups. Ertuby and Russel (1996), as quoted by De Beer (2004), suggest that because of their greater sophistication, IRT procedures provide the best results for detecting cultural differences on particular items. The null hypothesis that differences are due to chance alone was tested, and the results are shown in Table 7. The null hypothesis is rejected for each item, indicating that the observed DIF was not due to chance alone for all 7 items.
Table 7 Person summary DIF between class and group item.
Statistical significance tests such as DIF tests are, however, always of doubtful value in a Rasch context because differences can be statistically significant but far too small to have any impact on the meaning, or practical use, of the measures. Both statistical significance and substantive differences are needed before action should be considered. In order to determine substantive differences (> .5), Table 8 shows the DIF SIZE, which is the difference between the DIF MEASURE for a country and the AVERAGE DIFFICULTY (MEASURE) for each item across all the countries. The DIF measure is the item difficulty for each country, and the average difficulty is the overall difficulty of an item for all countries combined.
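The DIF size computation described here reduces to subtracting the cross-country average difficulty from each country's estimate. A sketch with hypothetical logit values (these are not the paper's figures):

```python
# Hypothetical item-difficulty (DIF MEASURE) estimates for one question across
# countries, in logits; illustrative values only.
dif_measure = {"Jordan": 1.1, "Finland": -0.4, "South Africa": 1.3,
               "Croatia": 0.5, "Hong Kong": 0.0}

# Baseline: the item's average difficulty across all countries combined.
average = sum(dif_measure.values()) / len(dif_measure)

# DIF size is each country's deviation from that baseline; absolute values
# above .5 logits are treated as substantive in the text.
dif_size = {c: round(m - average, 2) for c, m in dif_measure.items()}
substantive = [c for c, s in dif_size.items() if abs(s) > 0.5]

print(dif_size)
print(substantive)   # countries with substantive DIF on this item
```

With these made-up measures the baseline is .5 logits, and Jordan, Finland and South Africa would be flagged as showing substantive DIF on the item.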
It is clear from Table 8 that there are substantive differences across all the questions, especially regarding question 7b (the definition of inflation), which reported differences across 7 of the 11 countries. In contrast, question 7a (risk and return) had the least substantive difference, with only two countries, Croatia and Hong Kong, reporting significant differences. Through more detailed analysis of the DIF size for each question (see Figs. 5, 6, 7, 8, 9, 10, 11), the differences across countries become more evident. A straight line represents the baseline difficulty, and the DIF size is plotted for each country. An absolute value above .5 indicates a substantive difference.
Table 8 DIF size illustration across all countries
DIF size: question 3—time value of money
DIF size: question 4—interest paid on loan
DIF size: question 5—interest plus capital
DIF size: question 6—compound interest
DIF size: question 7a—risk and return
DIF size: question 7b—inflation
DIF size: question 7c—diversification
Figure 5 indicates that for QK3, the 'time value of money' question, the average difficulty measure was .5. The DIF sizes for Jordan, Canada, the United Kingdom and South Africa were positive, indicating that respondents found the question more difficult than the average difficulty for all countries combined, with South Africa and the United Kingdom experiencing the question as the most difficult. Croatia experienced the question as being at exactly the same difficulty level as the average difficulty of all countries combined. The countries that experienced the question as less difficult than average were Finland, Brazil, Russia, Hungary, Hong Kong and Austria, with Finland experiencing the question as least difficult.
Regarding QK4 (Fig. 6), the 'interest paid on loan' question, the average difficulty measure was −1.5, suggesting that the question was much easier than QK3. Similar to QK3, the DIF sizes for Jordan and South Africa were positive, indicating that they experienced this question as more difficult than the average difficulty for all countries combined. Whereas Brazil experienced QK3 as less difficult, the same is not true in the case of QK4. On the other side of the scale, Finland experienced the question as the least difficult of all the countries. Hong Kong and Austria, with similar results, were almost on par with the average difficulty across all countries.
The difficulty across all countries regarding the concept of compound interest (QK6—Fig. 8) is evident, with the average difficulty measure being at 2.5—up from .4 with reference to the concept of simple interest alone (QK5—Fig. 7). As the result of QK6 was calculated based on the respondent having both QK5 and QK6 correct, the mistargeting regarding compound interest is worrisome.
However, in contrast to the previous question, the spread of the difficulty measurement across all countries is fairly limited for QK6, so compound interest seems to be problematic across all countries, which supports the results of Table 7 and Fig. 8.
The average difficulty measure for both QK7a (Risk and return—Fig. 9) and QK7b (Inflation—Fig. 10) was just below − 1.5, thus indicating that these two questions were relatively easy compared with QK6 (Fig. 8).
QK7c (Diversification—Fig. 11) was the only question for which the average difficulty measure across all countries was 0, indicating that this question was on par.
Based on the examination of the DIF size per question, substantive differences are evident across all the questions. Based on the DIF size of 2.6, QK6 was by far the most difficult question for respondents across all the countries. In contrast, QK4 was much easier, with a DIF size of −1.44. The number of countries reporting substantive differences also differed per question, ranging from 2 countries (QK7a) to 7 countries (QK7b).
Although the data conformed to the unidimensionality test for purposes of the Rasch model, the preceding DIF results, being indicative of substantive differences among the responses, prompted the question of construct validity. It was therefore decided to revert to CTT, namely optimal scaling, to reassess the dimensionality of the seven questions for each country, given the binary nature of the data as recoded for purposes of the Rasch analysis. In optimal scaling, numerical quantifications are assigned to the categories of each variable, thus allowing standard procedures to be used to obtain a solution on the quantified variables.
The optimal scale values are assigned to categories of each variable based on the optimising criterion of the procedure in use. Unlike the original labels of the nominal or ordinal variables in the analysis, these scale values have metric properties. The optimal quantification for each scaled variable is obtained through an iterative method called alternating least squares in which, after the current quantifications are used to find a solution, the quantifications are updated using that solution. The updated quantifications are then used to find a new solution, which is used to update the quantifications, and so on, until a convergence criterion signals the process to stop. As the aim of the analysis was data reduction and the optimal scaling level was multiple nominal, multiple correspondence analysis was conducted to determine the dimensionality.
Multiple correspondence analysis quantifies nominal (categorical) data by assigning numerical values to the cases (objects) and categories so that objects within the same category are close together and objects in different categories are far apart. Each object is as close as possible to the category points of categories that apply to the object. In this way, the categories divide the objects into homogeneous subgroups. Variables are considered homogeneous when they classify objects in the same categories into the same subgroups. As all the variables have multiple nominal scaling levels, multiple correspondence analysis is identical to categorical principal components analysis. The results are shown in Table 9.
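A minimal version of this analysis can be sketched via an indicator matrix and a singular value decomposition. The binary responses below are simulated, and a full MCA implementation would add category weights and inertia corrections; this is only the core computation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical correct/incorrect answers (0/1) of 200 respondents to 7 items.
x = (rng.random((200, 7)) < 0.6).astype(int)

# Indicator (dummy) matrix: one column per category of each variable,
# so 2 columns (incorrect/correct) per item, 14 columns in total.
n, m = x.shape
z = np.zeros((n, 2 * m))
z[np.arange(n)[:, None], 2 * np.arange(m) + x] = 1

# Correspondence analysis of the indicator matrix: SVD of the standardised
# residuals of the associated correspondence table.
p = z / z.sum()
r = p.sum(axis=1, keepdims=True)   # row masses
c = p.sum(axis=0, keepdims=True)   # column masses
s = (p - r @ c) / np.sqrt(r @ c)
singular_values = np.linalg.svd(s, compute_uv=False)

# The squared singular values (principal inertias) play the role of
# eigenvalues; their decay suggests how many dimensions to retain.
print(np.round(singular_values[:3] ** 2, 3))
```

Inspecting the drop-off in the principal inertias is the usual basis for deciding between, say, a two- and a three-dimensional solution, which is the decision reported in Table 9.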
Table 9 Results of the multiple correspondence analysis
The results in Table 9 indicate that a two-dimensional structure was observed for all countries except South Africa and the United Kingdom, where a three-dimensional structure was observed. The numbers 1, 2 and 3 in Table 9 indicate the factor on which a specific item loads. Brazil and Hong Kong displayed a factor structure in which the same items load onto the two respective dimensions. Canada and Croatia also had a similar two-dimension factor structure, but not the same structure as that of Brazil and Hong Kong. Thus, although the results indicated two dimensions, the questions determining the two dimensions were not consistent across the countries.
Discussion of results
The importance and value of ILSAs has been well documented, but the importance of ensuring that the interpretation of the outcomes is both absolutely and comparatively correct led to the assessment of the OECD/INFE financial literacy measurement instrument, and more specifically the financial knowledge domain, reported on in this article. It is evident from the discussion on the development of the assessment framework and the operationalisation exercise conducted by the OECD/INFE Measure Subgroup that considerable effort went into the design and development of the comprehensive measurement instrument to ensure valid and reliable results. Based on the salient features of test development in large-scale assessments, the instrument developers aspired to construct validity in measuring basic financial knowledge but were challenged with limiting the length of the overall assessment. Brevity and respondent fatigue on the one hand had to be balanced with construct validity and policy objectives on the other hand. Given that one of the aims of the OECD/INFE ILSA was to provide benchmarks against which countries can compare themselves, questions were selected to be indicative but not confirmatory of full coverage of the various topic domains. This also holds true for the financial knowledge assessment which is the focus of this article.
Braun (2013) reflects that countries involved in ILSAs should reflect upon and review the implications for their own jurisdiction, and the results of the Rasch model clearly support this notion. Based on the Rasch model applied in this article, it was evident that the datasets utilised in the assessment of the OECD/INFE of adult financial knowledge assessment do adhere to the foundational assumption of unidimensionality, thereby paving the way for the comprehensive review of the quality of the measurement instrument focused on the limited questions measuring financial knowledge.
However, in terms of the model fit assessment, the data do fit the Rasch model reasonably well, but the reliability indices employed indicate a mismatch between the respondents and the item difficulty across the various questions. Based on the item difficulty evaluation, in conjunction with the person and item distributions across the logit scale on the Wright maps, the measurement instrument was shown to be not as discriminating as one would expect. Evaluation of the results for each country confirms that for certain countries the survey instrument might be on par, but for a country such as Hong Kong the items appeared too easy to really distinguish between higher and lower achievers: the majority of the respondents found the questions very easy. The effect of the exclusion of the outliers (those who achieved 0 or 7), especially in the case of Hong Kong, was not determined in this article but could have an impact on the reported results and should be considered in future analysis of this assessment instrument.
Given the exclusion of the outliers, the instrument does not necessarily assist the Hong Kong government in identifying problematic areas that would require additional financial knowledge in the Hong Kong context. Compared with other countries (similar to the traditional ranking exercise), there is clearly a mistargeting between the difficulty of the questions and the ability of the respondents. The opposite is possibly true for the South African respondents, with the majority of the respondents experiencing the questions as too difficult in relation to their ability. The reasons for the discrepancy in the financial knowledge results (person ability versus item difficulty) might be attributed more to a lack of the underlying required competencies such as numeracy rather than to the lack of financial knowledge per se. Thus, although it seems that there is evidence of internal validity of the OECD/INFE adult financial knowledge assessment in terms of the reliability and item fit assessment, the results are not as convincing as one would have expected.
The evaluation of the psychometric appropriateness of the measurement instrument identified further problems regarding the comparability of the results. In terms of the item fit and difficulty estimates, compound interest (QK6) was the most difficult question across all countries, while respondents appeared better informed regarding nominal interest. Given the importance of compound interest for both debt and savings, this issue should be taken into consideration in measurement as well as financial education initiatives. Households with high debt levels might not understand the implications of compound debt for their long-term repayments. By understanding the benefits of compound interest over time, households could achieve much higher levels of financial security, should they start saving early enough.
However, focusing on the quality of the individual questions to provide the reliable and valid results required for a cross-country assessment, the DIF results do not support such an assessment. Participants in the different countries experienced high levels of variance across the financial knowledge assessment questions. The average difficulty of the individual questions ranged from −1.44 to 2.6, but as illustrated in Figs. 5, 6, 7, 8, 9, 10, 11, no consistent pattern regarding the distribution amongst the various countries could be identified. This result was foreshadowed by the inconsistent patterns reported in the over- and underfit assessments as well.
Faced with the high level of variance, the decision was made to reassess the dimensionality of the seven questions. Through the application of multiple correspondence analysis it was determined that two dimensions, and for South Africa and the United Kingdom even three, were present across the various countries. None of the countries reflected a single dimension. This result could be influenced by the nature of the underlying questions, as the first four questions were multiple choice compared with the true/false options of the last three questions. The two dimensions (as per Table 9) for Brazil, Canada, Croatia and Hong Kong strongly reflect the nature of the question scales (i.e. the first four versus the last three) and thus might be influenced by the framing of the questions and not necessarily by the content assessed.
The overall assessment, informed by the psychometric evaluation, emphasises that the current set of financial knowledge questions should be reconsidered and possibly adapted for purposes of an international large-scale assessment.
More comprehensive information is available from the author on request.
CTT:
Classical Test Theory
DIF:
differential item functioning
EGRA:
Early Grade Reading Assessment
G20:
group of twenty
GFLEC:
Global Financial Literacy Excellence Center
HDI:
Human Development Index
ILSA:
international large-scale assessments
IRT:
Item Response Theory
OECD:
Organisation for Economic Co-operation and Development
OECD/INFE:
Organisation for Economic Co-operation and Development/International Network for Financial Education
PASEC:
Programme for the Analysis of Education Systems
PCA:
principal component analysis
PIRLS:
Progress in International Reading Literacy Study
PISA:
Programme for International Student Assessment
RMSE:
root mean square error
TIAA Institute:
Teachers Insurance and Annuity Association of America Institute
TIMSS:
Trends in International Mathematics and Science Study
WEF:
World Economic Forum
WEO:
International Monetary Fund's World Economic Outlook
WESP:
United Nations World Economic Situation and Prospects
Alessie RJM, Van Rooij MCJ, Lusardi A (2011) Financial literacy and retirement planning in the Netherlands. J Pension Econ Finance 10(4):527–545. https://doi.org/10.1016/j.joep.2011.02.004
Atkinson A (2011) Measuring financial capability using a short survey instrument: Instruction manual. University of Bristol, Bristol
Atkinson A, Messy F (2011) Assessing financial literacy in 12 countries: an OECD/INFE international pilot exercise. J Pension Econ Finance 10(4):657–665. https://doi.org/10.1017/S1474747211000539
Atkinson A, Messy F (2012) Measuring financial literacy: results of the OECD/International Network on Financial Education (INFE) pilot study. 15. https://doi.org/10.1787/5k9csfs90fr4-en
Bond T, Fox CM (2014) Applying the Rasch model: fundamental measurement in the human sciences, 3rd edn. Routledge, New York
Boone WJ, Staver JR, Yale MS (2014) Rasch analysis in the human sciences. Springer, Berlin
Braun H (2013) Chapter 8: prospects for the future: a framework and discussion of directions for the next generation of international large-scale assessments. In: Von Davier M et al (eds) The role of international large-scale assessments: perspectives from technology, economy, and educational research. Springer, London, pp 149–160
Cappelleri JC, Lundy JJ, Hays RD (2014) Overview of classical test theory and item response theory for quantitative assessment of items in developing patient- reported outcome measures. Clin Ther 36(5):648–662. https://doi.org/10.1016/j.clinthera.2014.04.006.Overview
De Beer M (2004) Use of differential item functioning (DIF) analysis for bias analysis in test construction. SA J Ind Psychol 30(4):52–58
Ertuby C, Russel RJH (1996) Dealing with comparability problem of cross-cultural data. In: Paper presented at the 26th international congress of psychology, Montreal, 16–21 August 1996
Kempson E (2009) Framework for the development of financial literacy baseline surveys: a first international comparative analysis. 1. http://dx.doi.org/10.1787/5kmddpz7m9zq-en
Kirsch I, Lennon M, Von Davier M, Gonzalez E, Yamamoto K (2013) Chapter 1: On the growing importance of international large-scale assessments. In: Von Davier M et al (eds) The role of international large-scale assessments: perspectives from technology, economy, and educational research. Springer, London, pp 1–11
Knoll MAZ, Houts CR (2012) The financial knowledge scale: an application of item response theory to the assessment of financial literacy. J Consum Aff 46(3):381–410. https://doi.org/10.1111/j.1745-6606.2012.01241.x
Krathwohl DR (2002) A revision of bloom's taxonomy: an overview. Theory into practice. 41(4). https://pdfs.semanticscholar.org/b479/833ef239f84f904085089b8a434c6346cd48.pdf. Accessed 11 May 2018
Kunovskaya IA, Cude BJ, Alexeev N (2014) Evaluation of a financial literacy test using classical test theory and item response theory. J Fam Econ Issues 35(4):516–531. https://doi.org/10.1007/s10834-013-9386-8
Lietz P, Cresswell JC, Rust KF, Adams RJ (2017) Implementation of large-scale education assessments. In: Lietz P, Cresswell JC, Rust KF, Adams RJ (eds) Implementation of large-scale education assessments. Wiley, Hoboken, pp 1–25
Linacre JM (2017a) Winsteps® Rasch measurement computer program. Winsteps.com, Beaverton
Linacre JM (2017b) Winsteps® Rasch measurement computer program User's Guide. Winsteps.com, Beaverton
Lusardi A, Mitchell OS (2007) Baby Boomer retirement security: the roles of planning, financial literacy, and housing wealth. J Monetary Econ 54(1):205–224. https://doi.org/10.1016/j.jmoneco.2006.12.001
Lusardi A, Mitchell OS (2009) How ordinary consumers make complex economic decisions: financial literacy and retirement readiness, NBER Working Paper Series. 15350. https://doi.org/10.1142/s2010139217500082
Lusardi A, Michaud P, Mitchell OS (2017) Optimal financial knowledge and wealth inequality. J Political Econ 125(2):431–477
OECD (2011) Measuring financial literacy: questionnaire and guidance notes for conducting an internationally comparable survey of financial literacy. http://www.oecd.org/daf/fin/financial-education/49319977.pdf. Accessed 10 Jan 2018
OECD (2013) OECD/INFE toolkit to measure financial literacy and inclusion: guidance, core questionnaire and supplementary questions. http://www.oecd.org/daf/fin/financial-education/TrustFund2013_OECD_INFE_toolkit_to_measure_fin_lit_and_fin_incl.pdf. Accessed 20 Jan 2018
OECD (2015) 2015 OECD/INFE toolkit for measuring financial literacy and financial inclusion. http://www.oecd.org/daf/fin/financial-education/2015_OECD_INFE_Toolkit_Measuring_Financial_Literacy.pdf. Accessed 10 Jan 2018
OECD (2016) OECD/INFE international survey of adult financial literacy competencies. OECD, Paris
OECD (2017) measuring financial literacy—OECD. http://www.oecd.org/finance/financial-education/measuringfinancialliteracy.htm. Accessed 25 Jan 2018
Progar Š, Sočan G (2008) An empirical comparison of item response theory and classical test theory. Horizons Psychol 17(3):5–24
Serrão A, Pinto-Ferreira C (2015) PISA—models and the reality. In: Pixel (ed.) The future of education international conference—5th edition. Florence
Van Rooij M, Lusardi A, Alessie R (2007) Financial literacy and stock market participation. 13565. http://www.nber.org/papers/w13565. Accessed 20 Aug 2018
Van Rooij M, Lusardi A, Alessie R (2011) Financial literacy and stock market participation. J Fin Econ 101(2):449–472
World Economic Forum (WEF) (2017) The global risks report 2017. 12th Edition. Geneva, Switzerland. http://www3.weforum.org/docs/GRR17_Report_web.pdf. Accessed 23 Jan 2018
Wright BJ (1994) Rasch factor analysis. In: Conference proceedings of the annual meeting of the Midwestern Educational Research Association. https://files.eric.ed.gov/fulltext/ED380476.pdf. Accessed 25 Jan 2018
Zinni MB (2013) Identifying drivers for the accumulation of household financial wealth. 264. http://papers.ssrn.com/paper.taf?abstract_id=2214962. Accessed 20 Aug 2018
Sole author, assisted by in-house statistician. The author read and approved the final manuscript.
The author declares no competing interests.
As a member of the OECD/INFE research group I have access to the data used in the analysis. Should other parties wish to use the data, permission should be obtained from the OECD/INFE.
I am the only author to the paper and therefore approve the manuscript for submission. This manuscript has not been published or submitted for publication elsewhere.
Ethical approval and consent to participate
Given the secondary nature of the data, no direct contact with the original participants was made. However, ethical approval for the project was obtained from the Ethical Committee of the College of Accounting Sciences, Unisa before embarking on the study. Approval was also obtained from the OECD/INFE for usage of the secondary data.
Funding
No external funding was obtained for the project. Internal support from the University of South Africa was utilised, such as the services of a statistician.
Department of Taxation, UNISA, Pretoria, South Africa
Bernadene de Clercq
Correspondence to Bernadene de Clercq.
The financial knowledge test.
See Table 10.
Table 10 Box 1: Extract from OECD (2016, p 84) on calculation of financial knowledge score
"The financial knowledge score is computed as the number of correct responses to the financial knowledge questions, according to Table 1. It ranges between 0 and 7 (it is also possible to replicate the 8-point score created in 2012 for countries using QK2 by adding the additional response)."
de Clercq, B. A comparative analysis of the OECD/INFE financial knowledge assessment using the Rasch model. Empirical Res Voc Ed Train 11, 8 (2019) doi:10.1186/s40461-019-0083-1
Received: 22 March 2018
Measurement instrument
OECD/INFE
Rasch analysis
Assessing Financial Literacy as a Basis for Designing and Evaluating Interventions in Vocational and Adult Education and Training | CommonCrawl |
How does one integrate $\int \cos(x^2) dx$?
I have thought about using standard techniques of 'integration by parts' and 'partial fractions' but neither of them works. I tried plugging it into Wolfram Alpha and I got $\sqrt{\frac{\pi}{2}}\, C\!\left(\sqrt{\frac{2}{\pi}}\, x\right) + $ a constant.
I am confused as to what the $C$ is supposed to mean and how to evaluate this integral.
letsmakemuffinstogether
$\begingroup$ At the bottom right of the output cell on WolframAlpha it says that "$C(x)$ is the Fresnel C integral". If you hover your mouse over that text, several links (such as this one) pop up describing it. The integral cannot be expressed as a finite combination of elementary functions. $\endgroup$ – Mark McClure Mar 21 '15 at 6:42
$\begingroup$ $$\int_{-\infty}^\infty\sin\big(x^2\big)~dx ~=~ \int_{-\infty}^\infty\cos\big(x^2\big)~dx~=~\sqrt{\frac\pi2}$$ and $$\int_{-\infty}^\infty e^{-x^2}~dx~=~\sqrt\pi$$ The two identities above, related to Fresnel and Gaussian integrals, are linked to each other by way of Euler's formula. $\endgroup$ – Lucian Mar 21 '15 at 8:19
$\begingroup$ reference.wolfram.com/language/ref/FresnelC.html $\endgroup$ – Aditya Hase Mar 21 '15 at 16:41
There's no elementary antiderivative for this function, i.e. the integral can't be expressed in closed form in terms of elementary functions. You'll have to use a series or numerical method to evaluate it. The most straightforward approach is to use the Taylor expansion of cosine, replacing $x$ with $x^2$:
$$\cos(x^2) = \sum_{n=0}^\infty (-1)^n \frac{(x^2)^{2n}}{(2n)!}$$
and then integrating term by term to the desired accuracy. The $C$ in the Wolfram Alpha output is the Fresnel cosine integral, as noted in the comments above; it is a standard non-elementary special function defined by precisely this integral.
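To make the term-by-term integration concrete, here's a minimal Python sketch (function names are my own, not from any library) that sums the integrated series for $\int_0^t \cos(x^2)\,dx$ and checks it against composite Simpson's rule:

```python
import math

def cos_x2_integral_series(t, terms=30):
    # Integrate the Taylor series of cos(x^2) term by term on [0, t]:
    # integral_0^t cos(x^2) dx = sum_n (-1)^n t^(4n+1) / ((2n)! (4n+1))
    return sum((-1) ** n * t ** (4 * n + 1) / (math.factorial(2 * n) * (4 * n + 1))
               for n in range(terms))

def cos_x2_integral_simpson(t, n=10_000):
    # Composite Simpson's rule on [0, t] as an independent numerical check.
    h = t / n
    s = math.cos(0.0) + math.cos(t * t)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * math.cos((i * h) ** 2)
    return s * h / 3

print(cos_x2_integral_series(1.0))   # ≈ 0.9045242
print(cos_x2_integral_simpson(1.0))  # agrees to many decimal places
```

For moderate $|t|$ the series converges extremely fast; for large $|t|$ the terms first grow before shrinking, which is why in practice one switches to the Fresnel function or plain quadrature.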
The integral can also be evaluated using complex analysis methods, but I was going on the assumption that you aren't familiar with complex function theory. If you're interested in the theory and methods of such integrals, there's a very nice discussion of it by Brian Conrad here.
Mathemagician1234
Subseries convergence
If $(G, \tau)$ is a Hausdorff Abelian topological group, a series $\sum x_k$ in $G$ is $\tau$-subseries convergent (respectively, unconditionally convergent) if for each subsequence $\{ x_{n_k} \}$ (respectively, each permutation $\pi$) of $\{ x_k \}$, the subseries $\sum_{k=1}^{\infty} x_{n_k}$ (respectively, the rearrangement $\sum_{k=1}^{\infty} x_{\pi(k)}$) is $\tau$-convergent in $G$. In one of the early papers in the history of functional analysis, W. Orlicz showed that if $X$ is a weakly sequentially complete Banach space, then a series in $X$ is weakly unconditionally convergent if and only if the series is norm unconditionally convergent [a5]. Later, he noted that if "unconditional convergence" is replaced by "subseries convergence", the proof showed that the weak sequential completeness assumption could be dropped. That is, a series in a Banach space is weakly subseries convergent if and only if the series is norm subseries convergent; this result was announced in [a1], but no proof was given. In treating some problems in vector-valued measure and integration theory, B.J. Pettis needed to use this result but noted that no proof was supplied and then proceeded to give a proof ([a6]; the proof is very similar to that of Orlicz). The result subsequently came to be known as the Orlicz–Pettis theorem (see [a3] for a historical discussion).
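For intuition in the simplest case $G = \mathbb{R}$: a series of real numbers is subseries convergent precisely when it is absolutely convergent, so the alternating harmonic series is convergent but not subseries convergent. A small Python check (illustrative only; the cut-off $N$ is arbitrary) contrasts the full series with the divergent subseries of its positive terms:

```python
import math

# The alternating harmonic series sum of (-1)^(k+1)/k converges (to ln 2),
# but the subseries of its positive terms, sum of 1/k over odd k, grows
# like (1/2) log N -- so the series is not subseries convergent in R.
N = 1_000_000
full = sum((1.0 if k % 2 else -1.0) / k for k in range(1, N + 1))
odd_subseries = sum(1.0 / k for k in range(1, N + 1, 2))

print(full)           # ≈ ln 2 ≈ 0.693147
print(odd_subseries)  # ≈ 7.5 at N = 10^6, and unbounded as N grows
```

In finite dimensions subseries, unconditional, and absolute convergence all coincide; the content of the Orlicz–Pettis theorem is that even in infinite dimensions, weak and norm subseries convergence still coincide.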
Since the Orlicz–Pettis theorem has many applications, particularly to the area of vector-valued measure and integration theory, there have been attempts to generalize the theorem in several directions. For example, A. Grothendieck remarked that the result held for locally convex spaces and a proof was supplied by C.W. McArthur. Recent (1998) results have attempted to push subseries convergence to topologies on the space which are stronger than the original topology (for references to these results, see the historical survey of [a4]).
In the case of a Banach space $X$, attempts have been made to replace the weak topology of $X$ by a weaker topology, $\sigma(X, Y)$, generated by a subspace $Y$ of the dual space of $X$ which separates the points of $X$. Perhaps the best result in this direction is the Diestel–Faires theorem, which states that if $X$ contains no subspace isomorphic to $\ell^{\infty}$, then a series in $X$ is $\sigma(X, Y)$ subseries convergent if and only if the series is norm subseries convergent. If $X$ is the dual of a Banach space $Z$ and $Y = Z$, then the converse also holds (see [a2] for references and further results).
J. Stiles gave what is probably the first extension of the Orlicz–Pettis theorem to non-locally convex spaces; namely, he established a version of the theorem for a complete metric linear space with a Schauder basis. This leads to a very general form of the theorem by N. Kalton in the context of Abelian topological groups (see [a4] for references on these and further results).
[a1] S. Banach, "Théorie des opérations linéaires", Monogr. Mat., Warsaw (1932)
[a2] J. Diestel, J. Uhl, "Vector measures", Surveys, 15, Amer. Math. Soc. (1977)
[a3] W. Filter, I. Labuda, "Essays on the Orlicz–Pettis theorem I", Real Anal. Exch., 16 (1990/91) pp. 393–403
[a4] N. Kalton, "The Orlicz–Pettis theorem", Contemp. Math., 2 (1980)
[a5] W. Orlicz, "Beiträge zur Theorie der Orthogonalentwicklungen II", Studia Math., 1 (1929) pp. 241–255
[a6] B.J. Pettis, "On integration in vector spaces", Trans. Amer. Math. Soc., 44 (1938) pp. 277–304
Subseries convergence. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Subseries_convergence&oldid=49895
This article was adapted from an original article by Charles W. Swartz (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
2-Limits and 2-Terminal Objects are too Different
tslil clingman & Lyne Moser (ORCID: orcid.org/0000-0001-8296-6594)
Applied Categorical Structures volume 30, pages 1283–1304 (2022)
In ordinary category theory, limits are known to be equivalent to terminal objects in the slice category of cones. In this paper, we prove that the 2-categorical analogues of this theorem relating 2-limits and 2-terminal objects in the various choices of slice 2-categories of 2-cones are false. Furthermore we show that, even when weakening the 2-cones to pseudo- or lax-natural transformations, or considering bi-type limits and bi-terminal objects, there is still no such correspondence.
In this paper we address the question of whether the natural 2-categorical analogue of the 1-categorical result, namely the correspondence between limits and terminal objects in a slice category of cones, holds.
1.1 Motivation, the 1-Dimensional Case and the Case of Cat
A limit of a functor \( {F}:{I}\rightarrow {{\mathcal {C}}} \) comprises the data of an object \(L\in {\mathcal {C}}\) together with a natural transformation \( {\lambda }:{\Delta L}\Rightarrow {F} \), called the limit cone, which satisfies the following universal property: for each \(X\in {\mathcal {C}}\), the map \( {\lambda _{*}\circ \Delta }:{{\mathcal {C}}(X,L)}\rightarrow {\text {\textsf {Cat}}(I,{\mathcal {C}})(\Delta X,F)} \) given by post-composition with \(\lambda \) is an isomorphism of sets. We may also form the slice category \({\Delta }\downarrow {F}\) of cones over F, and it is a folklore result that a limit of F is equivalently a terminal object in \({\Delta }\downarrow {F}\).
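As a toy illustration of this folklore result (the poset and diagram below are my own choices, purely for concreteness), regard a poset as a category: a cone over a two-object diagram is exactly a common lower bound, and a terminal object in the category of cones is exactly a greatest lower bound, i.e. the limit. A short Python sketch:

```python
from math import gcd

# Divisors of 12 ordered by divisibility, viewed as a category:
# there is a (unique) morphism x -> y exactly when x divides y.
objects = [1, 2, 3, 4, 6, 12]

def cones_over(a, b):
    # A cone over the discrete diagram {a, b} is an object with a morphism
    # to both a and b, i.e. a common lower bound (a common divisor here).
    return [x for x in objects if a % x == 0 and b % x == 0]

def terminal_cone(a, b):
    # Terminal in the category of cones: every other cone admits a (unique)
    # morphism into it, i.e. every common divisor divides it.
    cs = cones_over(a, b)
    return next(t for t in cs if all(t % x == 0 for x in cs))

print(terminal_cone(4, 6))  # 2, which is also the categorical product gcd(4, 6)
```

The point is only that "limit" and "terminal cone" are interchangeable notions in dimension 1; the counter-examples of this paper show that no analogous interchange survives in dimension 2.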
As an example, and in progressing up in dimension, let us now consider products in Cat – the category of small categories and functors. The universal property of the product \(({\mathcal {C}}\times {\mathcal {D}},\pi _{\mathcal {C}},\pi _{\mathcal {D}})\) of two categories \({\mathcal {C}}\) and \({\mathcal {D}}\) gives, for each pair of functors \( {F}:{{\mathcal {X}}}\rightarrow {{\mathcal {C}}} \) and \( {G}:{{\mathcal {X}}}\rightarrow {{\mathcal {D}}} \), a functor \( {\left<F,G\right>}:{{\mathcal {X}}}\rightarrow {{\mathcal {C}}\times {\mathcal {D}}} \) unique among those satisfying

$$\pi _{{\mathcal {C}}}\left<F,G\right>=F, \qquad \pi _{{\mathcal {D}}}\left<F,G\right>=G. \tag{1.1}$$
However, the category \(\text {\textsf {Cat}}\) has further structure. Indeed, it is a 2-category with 2-morphisms the natural transformations between the functors. This 2-dimensional structure is compatible with the product of categories. More precisely, there is a bijection of natural transformations as depicted below, which is implemented by whiskering with the projection functors.

$$\left\{\, \gamma :\left<F,G\right>\Rightarrow \left<F',G'\right> \,\right\} \;\cong \; \left\{\, (\alpha ,\beta ) \;\middle |\; \alpha :F\Rightarrow F',\ \beta :G\Rightarrow G' \,\right\} \tag{1.2}$$
Observe that the natural transformations \(\alpha \) and \(\beta \) correspond to functors \(\alpha :{\mathcal {X}}\times \mathbb {2}\rightarrow {\mathcal {C}}\) and \(\beta :{\mathcal {X}}\times \mathbb {2}\rightarrow {\mathcal {D}}\), where \(\mathbb {2}\) is the category \(\{0\rightarrow 1\}\). In this light, the bijection (1.2) of natural transformations can be retrieved by applying the universal property of (1.1) to the functors \(\alpha :{\mathcal {X}}\times \mathbb {2}\rightarrow {\mathcal {C}}\) and \(\beta :{\mathcal {X}}\times \mathbb {2}\rightarrow {\mathcal {D}}\).
Taken together, the bijections of (1.1) and (1.2) assemble into an isomorphism of categories

$$\text {\textsf {Cat}}({\mathcal {X}},{\mathcal {C}}\times {\mathcal {D}})\ \cong \ \text {\textsf {Cat}}({\mathcal {X}},{\mathcal {C}})\times \text {\textsf {Cat}}({\mathcal {X}},{\mathcal {D}}).$$
This then is the defining feature of a 2-dimensional limit: there are two aspects of the universal property, one for morphisms and one for 2-morphisms.
In the case of the product above, the indexing category is just a 1-category. Since \(\text {\textsf {Cat}}\) is a 2-category, one could instead consider indexing diagrams by a 2-category I. In order to define a general 2-dimensional limit in \(\text {\textsf {Cat}}\), we need a category of higher morphisms between two 2-functors. This is the category of 2-natural transformations and 3-morphisms, called modifications, between them. With these notions, a 2-limit of a 2-functor \(F:I\rightarrow \text {\textsf {Cat}}\) can be defined as a pair \(({\mathcal {L}},\lambda )\) of a category \({\mathcal {L}}\) and a 2-natural transformation \( {\lambda }:{\Delta {\mathcal {L}}}\Rightarrow {F} \) which are such that post-composition with \(\lambda \) gives an isomorphism of categories

$$\lambda _{*}\circ \Delta :\ \text {\textsf {Cat}}({\mathcal {X}},{\mathcal {L}})\ \xrightarrow {\ \cong \ }\ [I,\text {\textsf {Cat}}](\Delta {\mathcal {X}},F) \tag{1.3}$$

for each category \({\mathcal {X}}\), where \([I,\text {\textsf {Cat}}]\) denotes the 2-category of 2-functors, 2-natural transformations, and modifications.
1.2 2-Dimensional Conjectures
A 2-limit of a general 2-functor \( {F}:{I}\rightarrow {{\mathcal {A}}} \) is defined in the same fashion as indicated in (1.3) above; see Definition 2.4. This notion was first introduced, independently, by Auderset [1] and Borceux-Kelly [2], and was further developed by Street [10], Kelly [7, 8] and Lack in [9]. Motivated by the 1-categorical case, it is natural to ask whether 2-limits can be characterised as 2-dimensional terminal objects in the slice 2-category of 2-cones. The appropriate notion of terminality here is that of a 2-terminal object – an object such that every hom-category to this object is isomorphic to the terminal category \(\mathbb {1}\).
Having seen all other concepts involved, let us now introduce the slice 2-category \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) of 2-cones over a 2-functor \(F:I\rightarrow {\mathcal {A}}\). This 2-category has as objects pairs \((X,\mu )\) of an object \(X\in {\mathcal {A}}\) together with a 2-cone \(\mu :\Delta X\Rightarrow F\), and as morphisms \((X,\mu )\rightarrow (Y,\nu )\) those morphisms \( {f}:{X}\rightarrow {Y} \) of \({\mathcal {A}}\) making the 2-cones commute, in the sense that

$$\nu \circ \Delta f=\mu . \tag{1.4}$$
The 2-morphisms are given by 2-morphisms in \({\mathcal {A}}\) which satisfy a certain whiskering identity.
This slice 2-category seems appropriate to our conjecture since the 1-dimensional aspect of the universal property of a 2-terminal object in there is exactly the same as the 1-dimensional aspect of the universal property of a 2-limit. In the special case of \(\text {\textsf {Cat}}\), by generalising the argument we have seen for products, the 1-dimensional aspect of the universal property of a 2-limit in Cat suffices to reconstruct its 2-dimensional aspect. This holds more broadly in every 2-category admitting tensors by \(\mathbb {2}\), as demonstrated in Proposition 2.11. Given this, we conjecture:
Conjecture 1
Let I and \({\mathcal {A}}\) be 2-categories, and let \( {F}:{I}\rightarrow {{\mathcal {A}}} \) be a 2-functor. Let \(L\in {\mathcal {A}}\) be an object and \( {\lambda }:{\Delta L}\Rightarrow {F} \) be a 2-natural transformation. The following two statements are equivalent:
The pair \((L,\lambda )\) is a 2-limit of the functor F.
The pair \((L,\lambda )\) is a 2-terminal object in the slice 2-category \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) of 2-cones over F.
Although it gives the authors no great pleasure to mislead the reader so, such a conjecture is false. While a 2-limit is always a 2-terminal object in the slice 2-category of 2-cones (see Proposition 2.9), the converse is not necessarily true (see Counter-example 2.10). The reason for this failure is that the 2-dimensional aspect of the universal property of a 2-terminal object in the slice 2-category is weaker than the 2-dimensional aspect of the universal property of a 2-limit. This manifests in, among other things, the inability of the slice 2-category to detect enough modifications between two 2-cones with the same summit.
This is not, however, the last word for this conjecture. The theory of 2-categories affords us more room than that of categories to coherently weaken various notions. Instead of considering a 2-category of 2-cones where the morphisms render triangles like that of (1.4) commutative, we are free to ask that the data of a morphism comprises also the data of a general, or perhaps invertible, modification filling that triangle. This leads to the notions of lax-slice and pseudo-slice 2-categories of 2-cones, respectively. Unlike the original, strict, slice 2-category given above, the lax- and pseudo-slice 2-categories detect all, or more, modifications between 2-cones. With this in mind, we might conjecture:
Conjecture 2

Let I and \({\mathcal {A}}\) be 2-categories, and let \( {F}:{I}\rightarrow {{\mathcal {A}}} \) be a 2-functor. Let \(L\in {\mathcal {A}}\) be an object and \( {\lambda }:{\Delta L}\Rightarrow {F} \) be a 2-natural transformation. The following two statements are equivalent:

The pair \((L,\lambda )\) is a 2-limit of the 2-functor F.
The pair \((L,\lambda )\) is a 2-terminal object in the lax-slice (or pseudo-slice) 2-category of 2-cones over F.
Unfortunately, this too is incorrect. The failure here is twofold: the pseudo-slice may still fail to detect enough modifications, as before, while simultaneously allowing too many new morphisms to appear. Similar issues plague the lax-slice, and as we will see in Sect. 3, 2-terminal objects in either are generally unrelated to 2-limits.
At this point, it is natural to ask whether the failure of Conjectures 1, 2 has something to do with the rigidity of the notion of 2-limits. We might, for instance, ask that our 2-cones have naturality triangles filled by a general 2-morphism, or perhaps any invertible 2-morphism, not just the identity. This leads us to consider lax-limits and pseudo-limits. One might imagine that these limits have special relationships with the lax- and pseudo-slices of matching cones, respectively. Specifically, one might hope that the peculiarities of these weaker notions of 2-dimensional limit conspire somehow to support the following conjectures.
Conjecture 3

Let I and \({\mathcal {A}}\) be 2-categories, and let \( {F}:{I}\rightarrow {{\mathcal {A}}} \) be a 2-functor. Let \(L\in {\mathcal {A}}\) be an object and \( {\lambda }:{\Delta L}\Rightarrow {F} \) be a pseudo-natural (resp. lax-natural) transformation. The following two statements are equivalent:
The pair \((L,\lambda )\) is a pseudo-limit (resp. lax-limit) of the functor F.
The pair \((L,\lambda )\) is a 2-terminal object in the strict-/pseudo-/lax-slice 2-category of pseudo-cones (resp. lax-cones) over F.
As in the case of 2-limits, pseudo- and lax-limits are in particular 2-terminal objects in the strict-slice of appropriate cones (see Proposition 4.3). However, all other implications are generally false, as established in Sect. 4.
We might then ask whether the failure of Conjectures 1, 2, 3 has had, all along, something to do with the rigidity of universal properties expressed by isomorphisms of categories. One might, on this view, hope to generate analogous and valid conjectures by weakening these isomorphisms to equivalences of categories – conjectures concerning bi-type limits and bi-terminal objects. However, as discussed in Sect. 5, even these do not hold.
Finally, one might wonder about the case of weighted 2-limits, which is a well-established notion for limits in enriched category theory in the literature. The theory of weighted limits was developed by Auderset [1], Street [10], and Kelly [8] in the case of 2-categories, and by Borceux-Kelly [2] as well as Kelly [7, Chapter 3] in the case of general enriched categories. However, as conical 2-limits, pseudo-limits, and lax-limits are special cases of such weighted 2-limits as noted in [8, §3 and §5], we can see that the analogues of Conjectures 1, 2, 3 for weighted 2-limits must in general fail too.
1.3 Outline
The structure of the paper is as follows. In Sect. 2, we introduce the notions of 2-limits and strict-slices of 2-cones. We prove that a 2-limit is always 2-terminal in the strict-slice of 2-cones, but we provide a counter-example demonstrating that the converse fails in general. However, for the converse to hold, it is sufficient for the ambient 2-category to admit tensors by \(\mathbb {2}\)—as is the case of the 2-category Cat. In Sect. 3, we turn our attention to the larger 2-categories of pseudo- and lax-slices of 2-cones. We provide counter-examples demonstrating that 2-limits are in fact unrelated to 2-terminal objects in these—neither notion generally implies the other. In Sect. 4, we introduce pseudo- and lax-limits, and investigate their relationships with 2-terminal objects in the different slices. Finally, in Sect. 5, we address the case of bi-type limits. We show that these are in particular always bi-terminal in the pseudo-slice of appropriate cones, and then adapt the results we have for the 2-type cases to the bi-type cases.
All of the counter-examples presented in this paper, with the exception of Example 4.5, are indexed by finite (1-)categories: \(\mathbb {2}\) and the pullback shape specifically. These counter-examples were generally constructed to exhibit certain data, for example a modification that is not detected by the strict-slice or a 2-morphism which gives an errant morphism in the lax-slice. The resulting diagram shapes were inessential to this process, and there are certainly many more counter-examples yet.
In Tables 1 and 2, we summarise our counter-examples and reductions for Conjectures 1, 2, 3—see Sect. 5 for the matching tables for the bi-type conjectures. Only results marked with a \(\checkmark \) are true; everything else establishes a counter-example. Note that the objects in the slices considered vary by column: the type of objects should match the type of the limit cone.
Table 1 2-type limits which are not 2-terminal
Table 2 2-terminal objects which are not 2-type limits
In consulting these tables, some readers might be confused that the adjectives "pseudo" and "lax" do not appear in the same order in the rows as they appear in the columns. We should be careful to consider that these adjectives play very different roles when attached to the various slice 2-categories as they do when attached to a notion of limit. Adding these adjectives to the labels of the rows changes the morphisms of the slices, while adding these adjectives to the labels of the columns changes the type of the cones, i.e. it changes the objects of the slices. The unexpected ordering of the columns of the tables has been chosen to be this way, since all counter-examples for pseudo-slices are reductions of counter-examples for lax-slices.
1.4 Positive Results for Characterisations of 2-Dimensional Limits
We may always view a 2-category as a horizontal double category with only trivial vertical morphisms, and in the double categorical setting we are now afforded a stronger notion of terminality. In this broader context, Grandis-Paré show in [5, 6] that (weighted) 2-limits of a 2-functor F are equivalently double terminal objects in the double category of (weighted) cones over the horizontal double functor induced by F; see also [4, §5.6]. Similar work in this direction is done by Verity in his thesis [11]. With this proliferation of positive results, it is surprising that the failures of Conjectures 1, 2, 3 are not documented in the literature. Grandis-Paré are certainly aware of such a failure as they write the following in their recent paper [6]:
On the other hand, there seems to be no natural way of expressing the 2-dimensional universal property of weighted (strict or pseudo) limits by terminality in a 2-category.
Unfortunately, however, Grandis-Paré record neither their formulation of the "natural way" nor the obstacles they encountered. We feel that Conjectures 1, 2, 3 express a natural expectation of the relationship between 2-limits and 2-terminal objects in a 2-category, and we hope that our counter-examples clearly illustrate the failure of all such conjectures.
Closer examination of these counter-examples reveals the need to capture additional information not present in the slice 2-category of cones. The double categorical approach of Grandis, Paré, and Verity certainly suffices for this task, but in our paper [3] we give a purely 2-categorical characterisation of 2-limits by constructing two different slice 2-categories of cones which have the joint property that objects which are simultaneously 2-terminal in both correspond precisely to 2-limits. One of these slice 2-categories is predictably the slice 2-category of cones, but in fact the other slice 2-category alone succeeds in precisely characterising 2-limits through bi-initial objects of a specific form. This second slice 2-category, however, is a shifted version of the usual slice 2-category of cones: its objects are modifications between cones. An advantage of this approach will be highlighted in forthcoming work by the second author, where a notion of \((\infty ,2)\)-limits can then be defined in a fully \((\infty ,2)\)-categorical language without requiring the development of the accompanying theory of double \((\infty ,1)\)-categories.
The counter-examples in this paper are indicative of a larger failure in the extension of 1-categorical theorems to the setting of 2-category theory. More generally, the existence and characterisations of bi-limits may be viewed as an instance of the corresponding problems for bi-representations of general pseudo-presheaves, and it is here that the analogy breaks down: while a representation for a presheaf corresponds to an initial object in the category of elements, the data of a bi-representation for a pseudo-presheaf is not wholly captured by a bi-initial object in the 2-category of elements.
At the level of 2-dimensional representations, in [3] we weaken the strict setting to that of pseudo-functors and pseudo-natural transformations and generalise the results of Grandis, Paré, and Verity to the case of bi-representations. In particular we give a double categorical characterisation of bi-representations of pseudo-presheaves in terms of double bi-initial objects in the double category of elements. Furthermore, we succeed in providing a purely 2-categorical characterisation of bi-representations in terms of objects which are simultaneously bi-initial in the familiar 2-category of elements and in a new 2-category of morphisms. In fact, we are able to demonstrate that bi-representations can actually be characterised as bi-initial objects of a specific form in the 2-category of morphisms alone. These results are the content of [3, Theorem 6.8]. The counter-examples of this paper establish the necessity of the presence of both 2-categories in the theorems there, as bi-limits are bi-representations. As a corollary of these theorems we obtain a purely 2-categorical characterisation of weighted bi-limits in [3, Theorem 7.19].
Finally, the positive results of Propositions 2.11, 5.5 are special cases of more general results for bi-representations: [3, Theorem 6.14] shows that in the presence of tensors by \(\mathbb {2}\), if the pseudo-presheaf preserves such tensors, then bi-representations are precisely bi-initial objects in the 2-category of elements.
2 2-Limits do not Correspond to 2-Terminal Objects in the Strict-Slice
In this section, we start by comparing 2-limits with 2-terminal objects in the strict-slice 2-category of 2-cones. After introducing all the terms involved, we show that a 2-limit is in particular a 2-terminal object in the strict-slice, but we provide a counter-example for the other implication. However, when the ambient 2-category admits tensors by \(\mathbb {2}\), as is the case for \(\text {\textsf {Cat}}\), these two notions do coincide.
A 2-category has not only the structure of a category, with objects and morphisms, but additionally has 2-morphisms between parallel morphisms. These 2-morphisms may be composed both vertically, along a common morphism boundary, and horizontally, along a common object boundary. To distinguish the two compositions of 2-morphisms, we write \(*\) for horizontal composition and use juxtaposition to denote vertical composition. A 2-functor between 2-categories comprises maps of objects, morphisms, and 2-morphisms strictly compatible with the 2-categorical structures. There are also notions of morphisms between 2-functors, and of morphisms between those, which we introduce now.
Definition 2.1
Let \(F,G:I\rightarrow {\mathcal {A}}\) be 2-functors. A 2-natural transformation \(\mu :F\Rightarrow G\) comprises the data of a morphism \(\mu _i:Fi\rightarrow Gi\) of \({\mathcal {A}}\) for each \(i\in I\), which must satisfy
for all morphisms \( {f}:{i}\rightarrow {j} \) of I, we have \((Gf)\mu _i=\mu _j(Ff)\), and
for all 2-morphisms \( {\alpha }:{f}\Rightarrow {g} \) of I, we have \(G\alpha *\mu _i=\mu _j*F\alpha \).
Definition 2.2
Let \( {F,G}:{I}\rightarrow {{\mathcal {A}}} \) be 2-functors and let \(\mu ,\nu :F\Rightarrow G\) be 2-natural transformations. A modification \(\varphi \) from \(\mu \) to \(\nu \) comprises the data of a 2-morphism \(\varphi _i:\mu _i\Rightarrow \nu _i\) for each \(i\in I\), which satisfy \(Gf*\varphi _i=\varphi _j*Ff\), for all morphisms \(f:i\rightarrow j\) of I.
With these definitions, 2-functors, 2-natural transformations, and modifications assemble into a 2-category.
Notation 2.3
Let I and \({\mathcal {A}}\) be 2-categories. We denote by \([I,{\mathcal {A}}]\) the 2-category of 2-functors \(I\rightarrow {\mathcal {A}}\), 2-natural transformations between them, and modifications.
We are now ready to define 2-dimensional limits.
Definition 2.4
Let I and \({\mathcal {A}}\) be 2-categories, and let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. A 2-limit of F comprises the data of an object \(L\in {\mathcal {A}}\) together with a 2-natural transformation \(\lambda :\Delta L\Rightarrow F\) such that, for each object \(X\in {\mathcal {A}}\), the functor
$$\begin{aligned} {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {[I,{\mathcal {A}}](\Delta X, F)} \end{aligned}$$
given by post-composition with \(\lambda \) is an isomorphism of categories.
In what follows, we call a 2-natural transformation \(\Delta X\Rightarrow F\) from a constant functor a 2-cone over F.
Remark 2.5
There are two aspects of the universal property of a 2-limit, which arise from the isomorphism of categories \( {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {[I,{\mathcal {A}}](\Delta X, F)} \) at the level of objects and at the level of morphisms. We reformulate this more explicitly as follows. For every \(X\in {\mathcal {A}}\),
for every 2-cone \(\mu :\Delta X\Rightarrow F\), there is a unique morphism \(f_{\mu }:X\rightarrow L\) in \({\mathcal {A}}\) such that \(\lambda \Delta f_{\mu }=\mu \),
for every modification \(\varphi \) between 2-cones \(\mu ,\nu :\Delta X\Rightarrow F\), there is a unique 2-morphism \(\alpha :f_{\mu }\Rightarrow f_{\nu }\) in \({\mathcal {A}}\) such that \(\lambda *\Delta \alpha =\varphi \).
We now define strict-slice 2-categories of 2-cones and 2-terminal objects.
Definition 2.6
Let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. The strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) of 2-cones over F is defined to be the following pullback in the (1-)category of 2-categories and 2-functors.
This 2-category \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) is given by the following data:
an object in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) is a pair \((X,\mu )\) of an object \(X\in {\mathcal {A}}\) together with a 2-natural transformation \(\mu :\Delta X\Rightarrow F\),
a morphism \(f:(X,\mu )\rightarrow (Y,\nu )\) consists of a morphism \(f:X\rightarrow Y\) in \({\mathcal {A}}\) such that \(\nu \Delta f=\mu \),
a 2-morphism \(\alpha :f\Rightarrow g\) between morphisms \(f,g:(X,\mu )\rightarrow (Y,\nu )\) is a 2-morphism \(\alpha :f\Rightarrow g\) in \({\mathcal {A}}\) such that \(\nu *\Delta \alpha ={{\,\mathrm{id}\,}}_{\mu }\).
Definition 2.7
Let \({\mathcal {A}}\) be a 2-category. An object \(L\in {\mathcal {A}}\) is 2-terminal if for all \(X\in {\mathcal {A}}\) there is an isomorphism of categories \({\mathcal {A}}(X,L)\cong \mathbb {1}\).
As for 2-limits, there are also two aspects of the universal property of a 2-terminal object. Since we are interested here in 2-terminal objects in a strict-slice 2-category of 2-cones, we give a more explicit description of their universal property. We will then compare this description with the universal property of 2-limits (c.f. Remark 2.5).
Remark 2.8
Given a 2-functor \(F:I\rightarrow {\mathcal {A}}\), we describe the two aspects of the universal property of a 2-terminal object in the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\). Such a 2-terminal object comprises the data of an object \(L\in {\mathcal {A}}\) together with a 2-cone \(\lambda :\Delta L\Rightarrow F\) which satisfy the following:
for every \(X\in {\mathcal {A}}\) and every 2-cone \(\mu :\Delta X\Rightarrow F\), there is a unique morphism \(f_{\mu }:X\rightarrow L\) in \({\mathcal {A}}\) such that \(\lambda \Delta f_{\mu }=\mu \),
for every \(X\in {\mathcal {A}}\) and every 2-cone \(\mu :\Delta X\Rightarrow F\), the unique 2-morphism \(f_{\mu }\Rightarrow f_{\mu }\) in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) is the identity \({{\,\mathrm{id}\,}}_{f_{\mu }}\).
In particular, we can see that the 2-dimensional aspect above is somewhat degenerate in comparison with (2) of Remark 2.5. However, the 1-dimensional aspect is the same as the one expressed in Remark 2.5 (1). This gives the following result.
Proposition 2.9
Let I and \({\mathcal {A}}\) be 2-categories, and let \( {F}:{I}\rightarrow {{\mathcal {A}}} \) be a 2-functor. If \((L,\lambda :\Delta L\Rightarrow F)\) is a 2-limit of F, then \((L,\lambda )\) is 2-terminal in the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) of 2-cones over F.
Proof
By Remark 2.5 (1) and Remark 2.8 (1), we observe that the 1-dimensional aspects of the universal property of a 2-limit and of a 2-terminal object in the strict-slice coincide. Both say that, for every \(X\in {\mathcal {A}}\) and every 2-cone \(\mu :\Delta X\Rightarrow F\), there exists a unique morphism \(f_{\mu }:X\rightarrow L\) in \({\mathcal {A}}\) such that \(\lambda \Delta f_{\mu }=\mu \).
It remains to show (2) of Remark 2.8, that is, that the unique 2-morphism \(f_{\mu }\Rightarrow f_{\mu }\) in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) is the identity \({{\,\mathrm{id}\,}}_{f_{\mu }}\). Any 2-morphism \( {\alpha }:{f_{\mu }}\Rightarrow {f_{\mu }} \) in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) must satisfy \(\lambda *\Delta \alpha ={{\,\mathrm{id}\,}}_{\mu }\) by Definition 2.6 (iii). In particular, we also have \(\lambda *\Delta {{\,\mathrm{id}\,}}_{f_{\mu }}={{\,\mathrm{id}\,}}_{\mu }=\lambda *\Delta \alpha \). By the uniqueness in Remark 2.5 (2), it follows that \(\alpha ={{\,\mathrm{id}\,}}_{f_{\mu }}\). \(\square \)
However, it is not true that every 2-terminal object in the strict-slice of 2-cones is a 2-limit. One reason for this is that the strict-slice only sees the identity modifications between 2-cones (compare Definition 2.6 (iii) with Remark 2.5 (2)). With this in mind, to illustrate this failure we give an example of a modification between two 2-cones which does not arise from a 2-morphism.
Counter-example 2.10
Let I be the pullback shape diagram \(\{ \bullet \longrightarrow \bullet \longleftarrow \bullet \}\). Let \({\mathcal {A}}\) be the 2-category generated by the data
subject to the relations \(b\lambda _0=c\lambda _1\) and \(b*\gamma _0=c*\gamma _1\). Take \(F:I\rightarrow {\mathcal {A}}\) to be the diagram
The object \((L, {\lambda }:{\Delta L}\Rightarrow {F} )\) is 2-terminal in the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) of 2-cones over F, but the functor
$$\begin{aligned} {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {[I,{\mathcal {A}}](\Delta X,F)} \end{aligned}$$
given by post-composition with \(\lambda \) is not surjective on morphisms, thus \((L,\lambda )\) is not a 2-limit of F.
Proof
The objects of the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) are given by the 2-cones over F:
$$\begin{aligned} (L,\lambda ),\quad (X,\lambda * f), \ \ \text {and} \ \ (X,\lambda * g). \end{aligned}$$
Each of these objects admits precisely one morphism to \((L,\lambda )\) in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) given by
$$\begin{aligned} {{\,\mathrm{id}\,}}_L:&(L,\lambda )\rightarrow (L,\lambda ) \\ f:&(X,\lambda * f)\rightarrow (L,\lambda ) \\ g:&(X,\lambda * g)\rightarrow (L,\lambda ). \end{aligned}$$
There are no non-trivial 2-morphisms to \((L,\lambda )\) in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\), since there are no non-trivial 2-morphisms between X and L in \({\mathcal {A}}\). This proves that \((L,\lambda )\) is 2-terminal in \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\).
However, the 2-morphisms \(\gamma _0\) and \(\gamma _1\) give the data of a modification \(\gamma \) between the 2-cones \(\lambda * f\) and \(\lambda * g\), i.e. a morphism in \([I,{\mathcal {A}}](\Delta X, F)\). But there is no 2-morphism between f and g in \({\mathcal {A}}\) that maps to \(\gamma \) via \( {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {[I,{\mathcal {A}}](\Delta X, F)} \). Hence \((L,\lambda )\) is not the 2-limit of F. \(\square \)
A 2-terminal object in the strict-slice is, however, a 2-limit when the 2-category \({\mathcal {A}}\) admits tensors by the category \(\mathbb {2}=\{0\rightarrow 1\}\). Indeed, it follows from this condition that the 1-dimensional aspect of the universal property of a 2-limit implies the 2-dimensional one (see [8, §3]). A 2-category \({\mathcal {A}}\) is said to admit tensors by a category \({\mathcal {C}}\) when, for each object \(X\in {\mathcal {A}}\), there exists an object \(X\otimes {\mathcal {C}}\in {\mathcal {A}}\) together with isomorphisms of categories
$$\begin{aligned} {\mathcal {A}}(X\otimes {\mathcal {C}}, Y) \mathrel {\cong }\text {\textsf {Cat}}({\mathcal {C}},{\mathcal {A}}(X,Y))\ , \end{aligned}$$
2-natural in \(X,Y\in {\mathcal {A}}\). In particular, this implies that there is a bijection between morphisms \(X\otimes {\mathcal {C}}\rightarrow Y\) in \({\mathcal {A}}\) and functors \({\mathcal {C}}\rightarrow {\mathcal {A}}(X,Y)\).
Proposition 2.11
Suppose \({\mathcal {A}}\) is a 2-category that admits tensors by \(\mathbb {2}\), and let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. Then an object \((L,\lambda :\Delta L\Rightarrow F)\) is a 2-terminal object in the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) if and only if it is a 2-limit of F.
Proof
We already saw one of the implications in Proposition 2.9. Let us prove the other.
Suppose that \((L,\lambda )\) is a 2-terminal object in the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\). We show that \((L,\lambda )\) satisfies the two conditions (1) and (2) of Remark 2.5, expressing the two aspects of the universal property of a 2-limit. It is clear that (1) holds since it is the same condition as the one expressing the 1-dimensional aspect of the universal property of a 2-terminal object, as in Remark 2.8 (1). It remains to show (2).
Before proceeding, let us examine the effect of admitting tensors by \(\mathbb {2}\). Let \(X\in {\mathcal {A}}\). The universal property of tensoring by \(\mathbb {2}\) in \({\mathcal {A}}\) gives a canonical bijection between morphisms \(X\otimes \mathbb {2}\rightarrow L\) in \({\mathcal {A}}\) and functors \(\mathbb {2}\rightarrow {\mathcal {A}}(X,L)\), and the latter correspond precisely to 2-morphisms between morphisms \(X\rightarrow L\). The 2-category \([I,{\mathcal {A}}]\) is also tensored by \(\mathbb {2}\), since \({\mathcal {A}}\) is, and the tensor is given object-wise [7, §3.3]. In particular, \(\Delta (X\otimes \mathbb {2})=\Delta X\otimes \mathbb {2}\) as constant functors, by the object-wise definition of tensoring by \(\mathbb {2}\). As \([I,{\mathcal {A}}]\) admits tensors by \(\mathbb {2}\), we have a canonical bijection between 2-cones \(\Delta X\otimes \mathbb {2}\Rightarrow F\) and functors \(\mathbb {2}\rightarrow [I,{\mathcal {A}}](\Delta X,F)\), which in turn correspond to modifications between 2-cones \(\Delta X\Rightarrow F\).
By Remark 2.8 (1), for every 2-cone \(\varphi :\Delta X\otimes \mathbb {2}\Rightarrow F\), there exists a unique morphism \(\alpha :X\otimes \mathbb {2}\rightarrow L\) in \({\mathcal {A}}\) such that \(\lambda \Delta \alpha =\varphi \). Using the above, we can reformulate this statement as follows: for every modification \(\varphi \) between 2-cones \(\Delta X\Rightarrow F\), there is a unique 2-morphism \(\alpha \) between morphisms \(X\rightarrow L\) such that \(\lambda \Delta \alpha =\varphi \). But this is exactly (2) of Remark 2.5. \(\square \)
The category \(\text {\textsf {Cat}}\) of categories and functors is cartesian closed. Therefore, it is enriched over itself and so is, in particular, tensored over \(\text {\textsf {Cat}}\). In other words, the 2-category \(\text {\textsf {Cat}}\) of categories, functors, and natural transformations admits tensors by all categories, and these tensors are given by cartesian products. In particular, Proposition 2.11 yields the following result.
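To make this concrete, here is the defining isomorphism of the tensor in the special case \({\mathcal {C}}=\mathbb {2}\), for categories \({\mathcal {X}}\) and \({\mathcal {Y}}\) (the notation \(H\) below is ours, introduced only for illustration):

$$\begin{aligned} \text {\textsf {Cat}}({\mathcal {X}}\times \mathbb {2},{\mathcal {Y}})\mathrel {\cong }\text {\textsf {Cat}}(\mathbb {2},\text {\textsf {Cat}}({\mathcal {X}},{\mathcal {Y}}))\ . \end{aligned}$$

Indeed, a functor \(H:{\mathcal {X}}\times \mathbb {2}\rightarrow {\mathcal {Y}}\) amounts to the two functors \(H(-,0),H(-,1):{\mathcal {X}}\rightarrow {\mathcal {Y}}\) together with the natural transformation \(H(-,0\rightarrow 1):H(-,0)\Rightarrow H(-,1)\), and this is exactly the data of a functor \(\mathbb {2}\rightarrow \text {\textsf {Cat}}({\mathcal {X}},{\mathcal {Y}})\).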
Corollary 2.12
Let \(F:I\rightarrow \text {\textsf {Cat}}\) be a 2-functor into \(\text {\textsf {Cat}}\). A pair \(({\mathcal {L}},\lambda :\Delta {\mathcal {L}}\Rightarrow F)\) is 2-terminal in the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) of 2-cones over F if and only if it is a 2-limit of F.
3 2-Terminal Objects in Lax- and Pseudo-Slices are not Related to 2-Limits
We have seen in Sect. 2 that 2-terminal objects in the strict-slice of 2-cones over a 2-functor do not, in general, succeed in capturing both aspects of the universal property of 2-limits. In particular, the problem is that the strict-slice of 2-cones does not see the modifications between two 2-cones with the same summit. In an attempt to rectify this, we might consider richer slice 2-categories containing more data in their morphisms: the lax-slice and the pseudo-slice of 2-cones. However, 2-terminal objects in these new slice 2-categories turn out to be unrelated to 2-limits. As we present below, there are 2-limits that are not 2-terminal objects in the lax-slice (resp. pseudo-slice), and conversely.
We start by introducing lax- and pseudo-natural transformations between 2-functors, and modifications between them.
Definition 3.1
Let I and \({\mathcal {A}}\) be 2-categories, and let \(F,G:I\rightarrow {\mathcal {A}}\) be 2-functors between them. A lax-natural transformation \(\mu :F\Rightarrow G\) comprises the data of
a morphism \( {\mu _i}:{Fi}\rightarrow {Gi} \), for each \(i\in I\),
a 2-morphism \( {\mu _f}:{(Gf)\mu _{i}}\Rightarrow {\mu _{j}(Ff)} \), for each morphism \(f:i\rightarrow j\) in I,
which satisfy the following conditions:
for all \(i\in I\), \(\mu _{{{\,\mathrm{id}\,}}_i}={{\,\mathrm{id}\,}}_{\mu _i}\),
for all composable morphisms f, g in I, \(\mu _{gf}=(\mu _g*Ff)(Gg*\mu _f)\),
for all 2-morphisms \(\alpha :f\Rightarrow g\) in I, we have that \(\mu _g(G\alpha *\mu _i)=(\mu _j*F\alpha )\mu _f\).
A pseudo-natural transformation is a lax-natural transformation \( {\mu }:{F}\Rightarrow {G} \) whose every 2-morphism component \(\mu _f\) is invertible.
Let \( {F,G}:{I}\rightarrow {{\mathcal {A}}} \) be 2-functors, and let \( {\mu ,\nu }:{F}\Rightarrow {G} \) be lax-natural transformations between them. A modification comprises the data of a 2-morphism \( {\varphi _i}:{\mu _i}\Rightarrow {\nu _{i}} \) for each \(i\in I\), which satisfy \(\nu _f(Gf*\varphi _i)=(\varphi _j*Ff)\mu _f\), for all morphisms \( {f}:{i}\rightarrow {j} \) in I.
Similarly, we have a notion of modification between pseudo-natural transformations.
Note that a 2-natural transformation \( {\mu }:{F}\Rightarrow {G} \) as defined in Definition 2.1 is precisely a lax-natural transformation whose every 2-morphism component \(\mu _f\) is an identity. Moreover, modifications in the sense just defined between two lax-natural transformations which happen to be 2-natural coincide with the modifications of Definition 2.2.
As in the case of 2-natural transformations, lax- and pseudo-natural transformations and modifications assemble into 2-categories whose objects are 2-functors.
Let I and \({\mathcal {A}}\) be 2-categories. We can define two 2-categories whose objects are the 2-functors \(I\rightarrow {\mathcal {A}}\):
the 2-category \({{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}]\), whose 1- and 2-morphisms are lax-natural transformations and modifications,
the 2-category \({{\,\mathrm{Ps}\,}}[I,{\mathcal {A}}]\), whose 1- and 2-morphisms are pseudo-natural transformations and modifications.
The lax-slice and pseudo-slice of 2-cones over a 2-functor \( {F}:{I}\rightarrow {{\mathcal {A}}} \) can be defined as pullbacks, as in Definition 2.6, where we replace the upper-left corner with the 2-categories \({{\,\mathrm{Lax}\,}}[\mathbb {2},[I,{\mathcal {A}}]]\) and \({{\,\mathrm{Ps}\,}}[\mathbb {2},[I,{\mathcal {A}}]]\), respectively. These constructions do not change the objects of the slice, but add more morphisms between them.
Definition 3.5
Let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. The lax-slice \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) of 2-cones over F is defined to be the following pullback in the (1-)category of 2-categories and 2-functors.
This 2-category \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) is given by the following data:
an object in \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) is a pair \((X,\mu )\) of an object \(X\in {\mathcal {A}}\) together with a 2-natural transformation \(\mu :\Delta X\Rightarrow F\),
a morphism \((f,\varphi ):(X,\mu )\rightarrow (Y,\nu )\) consists of a morphism \(f:X\rightarrow Y\) in \({\mathcal {A}}\) together with a modification \(\varphi \) from \(\nu \Delta f\) to \(\mu \),
a 2-morphism \(\alpha :(f,\varphi )\Rightarrow (g,\psi )\) between morphisms \((f,\varphi ),(g,\psi ):(X,\mu )\rightarrow (Y,\nu )\) is a 2-morphism \(\alpha :f\Rightarrow g\) in \({\mathcal {A}}\) such that \(\psi (\nu *\Delta \alpha )=\varphi \).
Similarly, we can define the pseudo-slice \({\Delta }\downarrow ^{\text {\tiny }}_{p}{F}\) of 2-cones over F by replacing the upper-left corner \({{\,\mathrm{Lax}\,}}[\mathbb {2},[I,{\mathcal {A}}]]\) in the pullback above with \({{\,\mathrm{Ps}\,}}[\mathbb {2},[I,{\mathcal {A}}]]\). The pseudo-slice corresponds to the sub-2-category of the lax-slice \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) containing all objects and only the morphisms \((f,\varphi )\) for which the modification \(\varphi \) is invertible, and which is locally-full on 2-morphisms.
Note that the strict-slice \({\Delta }\downarrow ^{\text {\tiny }}_{s}{F}\) as defined in Definition 2.6 corresponds to the locally-full sub-2-category of the lax- or pseudo-slice containing all objects and only the morphisms \((f,\varphi )\) for which the modification \(\varphi \) is an identity.
We now give two counter-examples which show that
not every 2-limit is 2-terminal in the lax-slice of 2-cones (Counter-ex. 3.7),
not every 2-terminal object in the lax-slice of 2-cones is a 2-limit (Counter-ex. 3.9).
These statements imply that, unlike in the case of strict-slices, 2-terminal objects in the lax-slice are not at all related to 2-limits. We derive counter-examples to show that the same is true for pseudo-slices, namely that 2-terminal objects in the pseudo-slice of 2-cones are not related to 2-limits.
We first give an example of a 2-limit that is not 2-terminal in the lax-slice of 2-cones. To illustrate this failure we seek a case where the lax-slice sees too many morphisms between the 2-cones. In the counter-example below, we show that a 2-morphism that is part of the 2-dimensional aspect of the universal property of a 2-limit might create undesirable morphisms in the lax-slice of 2-cones.
Counter-example 3.7
subject to the relation \(b\lambda _0=c\lambda _1\). Take \(F:I\rightarrow {\mathcal {A}}\) to be the diagram
The object \((L, {\lambda }:{\Delta L}\Rightarrow {F} )\) is the 2-limit of F, but it is not 2-terminal in the lax-slice \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) of 2-cones over F.
Proof
Let us begin by enumerating all the 2-cones over F:
We can see that \((L,\lambda )\) is a 2-limit of F, since we have
and \({\mathcal {A}}(L,L)=\{{{\,\mathrm{id}\,}}_L\}\) and \([I,{\mathcal {A}}](\Delta L,F)=\{\lambda \}\).
However, there are two distinct morphisms from \((X,\lambda *g)\) to \((L,\lambda )\) in the lax-slice \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\), which are given by
$$\begin{aligned} (g,({{\,\mathrm{id}\,}}_{\lambda _0g},{{\,\mathrm{id}\,}}_{\lambda _1g})):&(X,\lambda *g)\rightarrow (L,\lambda ) \\ (f,(\lambda _0*\alpha ,\lambda _1*\alpha )) :&(X,\lambda *g)\rightarrow (L,\lambda ). \end{aligned}$$
Therefore \((L,\lambda )\) is not 2-terminal in \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\). \(\square \)
Reduction 3.8
By requiring \(\alpha \) to be invertible in Counter-example 3.7, we can similarly show that \((L,\lambda )\) is the 2-limit of F, but is not 2-terminal in the pseudo-slice \({\Delta }\downarrow ^{\text {\tiny }}_{p}{F}\) of 2-cones over F.
Next we give an example of a 2-terminal object in the lax-slice of 2-cones that is not a 2-limit. This counter-example is designed to capture a particular arrangement of two 2-cones over a 2-functor together with a single non-trivial modification between them. This modification gives rise to a morphism in the lax-slice between these two 2-cones, exhibiting the target 2-cone as 2-terminal in the lax-slice. However, the source 2-cone is not in the image of the post-composition functor by the target 2-cone, which shows that the latter is not a 2-limit.
Counter-example 3.9
subject to the relations \(b\lambda _0=c\lambda _1\) and \(b\alpha _0=c\alpha _1\). Take \(F:I\rightarrow {\mathcal {A}}\) to be the diagram
The object \((L, {\lambda }:{\Delta L}\Rightarrow {F} )\) is 2-terminal in the lax-slice \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) of 2-cones over F, but the functor
given by post-composition with \(\lambda \) is not surjective on objects, thus \((L,\lambda )\) is not a 2-limit of F.
Proof
The objects of the lax-slice \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) are given by the 2-cones over F:
$$\begin{aligned} (L,\lambda ),\quad (X,\alpha ), \ \ \text {and} \ \ (X,\lambda * f). \end{aligned}$$
Each of these objects admits precisely one morphism to \((L,\lambda )\) in \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\) given by
$$\begin{aligned} ({{\,\mathrm{id}\,}}_L,{{\,\mathrm{id}\,}}_{\lambda _0},{{\,\mathrm{id}\,}}_{\lambda _1}):&(L,\lambda )\rightarrow (L,\lambda ) \\ (f,\gamma _0,\gamma _1):&(X,\alpha )\rightarrow (L,\lambda ) \\ (f,{{\,\mathrm{id}\,}}_{\lambda _0 f},{{\,\mathrm{id}\,}}_{\lambda _1 f}):&(X,\lambda * f)\rightarrow (L,\lambda ). \end{aligned}$$
There are no non-trivial 2-morphisms to \((L,\lambda )\) in \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\), since there are no non-trivial 2-morphisms between X and L in \({\mathcal {A}}\). This proves that \((L,\lambda )\) is 2-terminal in \({\Delta }\downarrow ^{\text {\tiny }}_{l}{F}\).
However, the 2-cone \( {\alpha }:{\Delta X}\Rightarrow {F} \) is an object of \([I,{\mathcal {A}}](\Delta X,F)\), but it is not in the image of \(\lambda _*\circ \Delta \). \(\square \)
Reduction 3.10
By requiring \(\gamma _0\) and \(\gamma _1\) to be invertible in Counter-example 3.9, we can similarly show that \((L,\lambda )\) is 2-terminal in the pseudo-slice \({\Delta }\downarrow ^{\text {\tiny }}_{p}{F}\) of 2-cones over F, but it is not a 2-limit of F.
4 The Cases of Pseudo- and Lax-Limits
Recall that part of the definition of a 2-limit involves 2-cones, which are 2-natural transformations. In Sect. 3, we presented weaker notions of 2-dimensional natural transformations, namely pseudo- and lax-natural transformations. These give other ways of taking a 2-dimensional limit of a 2-functor, by changing the shape of the 2-dimensional cones. The corresponding notions are called pseudo-limits and lax-limits.
In this section, we show that results similar to those we have seen in Sects. 2, 3 hold for pseudo- and lax-limits. In the lax-limit case, we show that:
every lax-limit is 2-terminal in the strict-slice of lax-cones (Remark 4.3),
not every 2-terminal object in the strict-slice of lax-cones is a lax-limit (Counter-ex. 4.5),
not every lax-limit is 2-terminal in the lax-slice of lax-cones (Counter-ex. 4.10),
not every 2-terminal object in the lax-slice of lax-cones is a lax-limit (Counter-ex. 4.14).
From the last two, we also derive the result that lax-limits are not related to 2-terminal objects in the pseudo-slice of lax-cones. With all of the results and counter-examples we have established thus far, we are able to derive proofs and counter-examples covering the conjectures related to pseudo-limits.
We first introduce the notions of pseudo- and lax-limits.
Definition 4.1
Let I and \({\mathcal {A}}\) be 2-categories, and let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor.
A pseudo-limit of F comprises the data of an object \(L\in {\mathcal {A}}\) together with a pseudo-natural transformation \(\lambda :\Delta L\Rightarrow F\) such that, for each object \(X\in {\mathcal {A}}\), the functor
$$\begin{aligned} {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {{{\,\mathrm{Ps}\,}}[I,{\mathcal {A}}](\Delta X, F)} \end{aligned}$$
A lax-limit of F comprises the data of an object \(L\in {\mathcal {A}}\) together with a lax-natural transformation \(\lambda :\Delta L\Rightarrow F\) such that, for each object \(X\in {\mathcal {A}}\), the functor
$$\begin{aligned} {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {{{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}](\Delta X, F)} \end{aligned}$$
In order to consider slices in which these pseudo- and lax-limit cones live, we need to change the shape of the cone objects of the slices considered in Definitions 2.6, 3.5.
Definition 4.2
We can also define the strict-, pseudo-, and lax-slices of pseudo-cones (resp. lax-cones) over F, by considering objects of the form \((X,\mu )\) where \( {\mu }:{\Delta X}\Rightarrow {F} \) is a pseudo-natural (resp. lax-natural) transformation. These constructions can be achieved by replacing \([I,{\mathcal {A}}]\) in the pullbacks of Definitions 2.6, 3.5 with \({{\,\mathrm{Ps}\,}}[I,{\mathcal {A}}]\) (resp. \({{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}]\)). For example, the pseudo-slice of lax-cones is the following pullback.
There is an analogue of Proposition 2.9 in the case of pseudo-limits (resp. lax-limits), whose proof may be derived by replacing 2-natural transformations with pseudo-natural ones (resp. lax-natural ones).
Remark 4.3
A pseudo-limit (resp. lax-limit) of a 2-functor is 2-terminal in the strict-slice of pseudo-cones (resp. lax-cones).
However, not every 2-terminal object in the strict-slice of pseudo- or lax-cones is a pseudo- or lax-limit. In particular, Counter-example 2.10 exhibits such an object in the pseudo-limit case:
Since there are no invertible 2-morphisms in Counter-example 2.10, this is also an example of a 2-terminal object in the strict-slice of pseudo-cones that is not a pseudo-limit.
Let us recall that Counter-example 2.10 has 2-morphisms \(\gamma _0\) and \(\gamma _1\) that introduce two additional lax-cones with summit X over F. These new lax-cones do not admit a morphism to \((L,\lambda )\) in the strict-slice of lax-cones, so \((L,\lambda )\) is not 2-terminal in the strict-slice of lax-cones. Therefore, we cannot use this counter-example for the lax-limit case.
The heart of the issue is that modifications between lax-cones over a 2-functor may be turned into lax-cones over this same 2-functor. Thus, to find an example of a 2-terminal object in the strict-slice of lax-cones that is not a lax-limit, we must find a case where such a transformation is not possible. To create such an example, we need the diagram shape to have objects that are both the source and the target of a non-trivial morphism.
Let I be the 2-category freely generated by the data
i.e. the non-trivial morphisms in I are given by all possible composites of x and y, e.g. xyxy. Let \({\mathcal {A}}\) be the 2-category freely generated by the data
subject to the relations \((\lambda _x*g)(a*\gamma _0)=\gamma _1(\lambda _x*f)\) and \((\lambda _y*g)(b*\gamma _1)=\gamma _0(\lambda _y*f)\). Again, we have all possible composites of the morphisms a and b in \({\mathcal {A}}\), and all possible pastings of the 2-morphisms \(\lambda _x\) and \(\lambda _y\). Take \(F:I\rightarrow {\mathcal {A}}\) to be the diagram defined on the generators x and y of I by \(F(x)=a\) and \(F(y)=b\).
Note that the morphisms \(\lambda _0:L\rightarrow A\) and \(\lambda _1:L\rightarrow B\) together with the 2-morphisms \(\lambda _x:a\lambda _0\Rightarrow \lambda _1\) and \(\lambda _y:b\lambda _1\Rightarrow \lambda _0\) suffice to give the data of a lax-natural transformation \(\lambda :\Delta L\Rightarrow F\). Indeed, by Definition 3.1 (2), the 2-morphism component of \(\lambda \) at some composite of x and y is determined by the corresponding pasting of the 2-morphism components \(\lambda _x\) and \(\lambda _y\).
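For instance, writing \(F(x)=a\) and \(F(y)=b\), the component of \(\lambda \) at the composite \(yx\) is forced to be the vertical composite of whiskered generating 2-morphisms. The following display is an illustrative unpacking of Definition 3.1 (2), not a display from the original:

```latex
\lambda_{yx} \;=\; \lambda_y \circ (b * \lambda_x) \,\colon\, ba\,\lambda_0 \Rightarrow \lambda_0,
\qquad
ba\,\lambda_0 \xRightarrow{\;b*\lambda_x\;} b\,\lambda_1 \xRightarrow{\;\lambda_y\;} \lambda_0 .
```

Longer composites such as xyxy are handled in exactly the same way, by pasting one copy of \(\lambda _x\) or \(\lambda _y\) per generator.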
The object \((L,\lambda :\Delta L\Rightarrow F)\) is 2-terminal in the strict-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{s}{F}\) of lax-cones over F, but the functor
$$\begin{aligned} {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {{{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}](\Delta X,F)} \end{aligned}$$
given by post-composition with \(\lambda \) is not surjective on morphisms, thus \((L,\lambda )\) is not a lax-limit of F.
The objects of the strict-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{s}{F}\) are given by the lax-cones over F:
$$\begin{aligned} (L,\lambda ),\quad (X,\lambda *f), \ \ \text {and} \ \ (X,\lambda *g). \end{aligned}$$
Note that the 2-morphisms \(\gamma _0\) and \(\gamma _1\) do not induce lax-cones over F with summit X, since there are no 2-morphisms from \(\lambda _1 g\) to \(\lambda _0 f\), and from \(\lambda _0 g\) to \(\lambda _1 f\) in \({\mathcal {A}}\), respectively. There are also no lax-cones over F with summit A or B since there are no non-trivial 2-morphisms between any two composites of a and b in \({\mathcal {A}}\). Each of the objects above admits precisely one morphism to \((L,\lambda )\) in \({\Delta }\downarrow ^{\text {\tiny lx}}_{s}{F}\) given by
$$\begin{aligned} {{\,\mathrm{id}\,}}_L:&(L,\lambda ) \rightarrow (L,\lambda ) \\ f:&(X,\lambda *f)\rightarrow (L,\lambda ) \\ g:&(X,\lambda *g)\rightarrow (L,\lambda ). \end{aligned}$$
There are no non-trivial 2-morphisms to \((L,\lambda )\) in \({\Delta }\downarrow ^{\text {\tiny lx}}_{s}{F}\), since there are no non-trivial 2-morphisms between X and L in \({\mathcal {A}}\). This proves that \((L,\lambda )\) is 2-terminal in \({\Delta }\downarrow ^{\text {\tiny lx}}_{s}{F}\).
However, the 2-morphisms \(\gamma _0\) and \(\gamma _1\) give the data of a modification \(\gamma \) from \(\lambda *f\) to \(\lambda *g\), i.e. a morphism in \({{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}](\Delta X, F)\). But there is no 2-morphism between f and g in \({\mathcal {A}}\) that maps to \(\gamma \) via \( {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {{{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}](\Delta X, F)} \). Hence \((L,\lambda )\) is not the lax-limit of F. \(\square \)
Note that, in the above counter-example, it is essential to have all free composites of x and y in I, and also of a and b in \({\mathcal {A}}\). Indeed, if we impose any conditions on the composites of x and y, e.g. x and y are mutual inverses, then the 2-functor F must preserve these, and these new conditions on a and b in \({\mathcal {A}}\) add undesirable lax-cones with summits A and B.
However, when the 2-category \({\mathcal {A}}\) admits tensors by \(\mathbb {2}\), there is an analogue of Proposition 2.11 in the pseudo-limit (resp. lax-limit) case, whose proof may be derived by replacing 2-natural transformations by pseudo-natural ones (resp. lax-natural ones).
Suppose \({\mathcal {A}}\) is a 2-category that admits tensors by \(\mathbb {2}\), and let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. Then an object is 2-terminal in the strict-slice of pseudo-cones (resp. lax-cones) over F if and only if it is a pseudo-limit (resp. lax-limit) of F.
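For the reader's convenience, recall the universal property of a tensor by \(\mathbb {2}\) used in this hypothesis (a standard definition, not stated in this excerpt): for an object \(A\in {\mathcal {A}}\), the tensor \(A\otimes \mathbb {2}\) is characterized by isomorphisms of categories, 2-natural in X,

```latex
\mathcal{A}(A \otimes \mathbb{2},\, X) \;\cong\; \mathcal{A}(A, X)^{\mathbb{2}}
\;=\; \operatorname{Cat}\!\bigl(\mathbb{2},\, \mathcal{A}(A, X)\bigr),
```

where \(\mathcal{A}(A,X)^{\mathbb {2}}\) is the arrow category of the hom-category \(\mathcal{A}(A,X)\).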
We dedicate the rest of the section to exploring counter-examples which together refute all the remaining conjectures relating pseudo- and lax-limits to the lax- and pseudo-slices of appropriate cones. We begin by recalling Counter-example 3.7, which also shows that not every pseudo-limit is a 2-terminal object in the lax-slice of pseudo-cones:
Since there are no invertible 2-morphisms in Counter-example 3.7, this is also an example of a pseudo-limit that is not 2-terminal in the lax-slice of pseudo-cones.
Let us recall that Counter-example 3.7 has a 2-morphism \(\alpha \). This 2-morphism was an obstruction to \((L,\lambda )\) being 2-terminal in the lax-slice, but not to being a 2-limit. In the move to lax-cones, however, this 2-morphism introduces an additional lax-cone over F with summit X that is not in the image of \(\lambda _*\circ \Delta \). Therefore, \((L,\lambda )\) cannot be a lax-limit of F, and we need a new counter-example for the lax-limit case.
Our new counter-example should have a non-trivial 2-morphism to serve as an obstruction to 2-terminality in the lax-slice. But, to ensure that this 2-morphism is in the image of post-composition by the to-be lax-limit cone, we must also introduce new relations.
Let \(I=\mathbb {2}\), and let \({\mathcal {A}}\) be the 2-category generated by the data
subject to the relations \(f\alpha _{0}=f\alpha _{1}\) and \(f*\alpha ={{\,\mathrm{id}\,}}_{f\alpha _{0}}\). Consider the 2-functor \( {f}:{\mathbb {2}}\rightarrow {{\mathcal {A}}} \) given by the morphism \( {f}:{A}\rightarrow {B} \).
The object \((A, {{{\,\mathrm{id}\,}}_f}:{\Delta A}\Rightarrow {f} )\) is the lax-limit of f in \({\mathcal {A}}\), but it is not 2-terminal in the lax-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\) of lax-cones over f.
Let us begin by enumerating all the lax-cones over f:
$$\begin{aligned} (A,{{\,\mathrm{id}\,}}_{f}),\quad (X,{{\,\mathrm{id}\,}}_{f\alpha _{0}}),\ \ \text {and}\ \ (X,{{\,\mathrm{id}\,}}_{f\alpha _{1}})\ .\end{aligned}$$
Note that the last two above-listed objects differ in their cone components to A: in the first case the leg is \(\alpha _0\), and in the second case it is \(\alpha _1\).
We can see that \((A,{{\,\mathrm{id}\,}}_f)\) is a lax-limit of f: the functor \(({{\,\mathrm{id}\,}}_f)_*\circ \Delta \) is an isomorphism of categories for every summit; in particular, \({\mathcal {A}}(A,A)=\{{{\,\mathrm{id}\,}}_A\}\) and \({{\,\mathrm{Lax}\,}}[\mathbb {2},{\mathcal {A}}](\Delta A,f)=\{{{\,\mathrm{id}\,}}_f\}\).
However, there are two distinct morphisms from \((X,{{\,\mathrm{id}\,}}_{f\alpha _{1}})\) to \((A,{{\,\mathrm{id}\,}}_{f})\) in the lax-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\) of lax-cones, given by
$$\begin{aligned} (\alpha _{1},({{\,\mathrm{id}\,}}_{\alpha _{1}},{{\,\mathrm{id}\,}}_{f\alpha _{1}})):&(X,{{\,\mathrm{id}\,}}_{f\alpha _{1}}) \rightarrow (A,{{\,\mathrm{id}\,}}_{f})\\ (\alpha _{0},(\alpha ,{{\,\mathrm{id}\,}}_{f\alpha _1})) :&(X,{{\,\mathrm{id}\,}}_{f\alpha _{1}}) \rightarrow (A,{{\,\mathrm{id}\,}}_{f}). \end{aligned}$$
where we have used the fact \(f*\alpha ={{\,\mathrm{id}\,}}_{f\alpha _{1}}\) in displaying the latter morphism. Therefore, \((A,{{\,\mathrm{id}\,}}_f)\) is not 2-terminal in the lax-slice of lax-cones over f. \(\square \)
Remark 4.11
Note that the object \((A,{{\,\mathrm{id}\,}}_f)\) is also a pseudo-limit of f. This gives a second example of a pseudo-limit that is not 2-terminal in the lax-slice of pseudo-cones.
By requiring \(\alpha \) to be invertible in Counter-example 4.10, we can similarly show that \((A,{{\,\mathrm{id}\,}}_{f})\) is a lax-limit (resp. pseudo-limit) of f, which is not 2-terminal in the pseudo-slice of lax-cones (resp. pseudo-cones).
Moreover, one can derive from Counter-example 3.9 that not every 2-terminal object in the lax-slice of pseudo-cones is a pseudo-limit:
Since there are no invertible 2-morphisms in Counter-example 3.9, this is also an example of a 2-terminal object in the lax-slice of pseudo-cones that is not a pseudo-limit.
Counter-example 3.9 in fact also applies in the lax-cone case. Although the computations are more involved, one can check that it is also an example of a 2-terminal object in the lax-slice of lax-cones that is not a lax-limit. However, there is a more striking example for this case. When considering diagrams of shape \(\mathbb {2}\), it turns out that even a single non-trivial lax-cone over a morphism exhibits a 2-terminal object in the lax-slice of lax-cones that is not a lax-limit of the morphism.
Consider the 2-functor \( {f}:{\mathbb {2}}\rightarrow {{\mathcal {A}}} \) given by the morphism \( {f}:{A}\rightarrow {B} \).
The object \((A, {{{\,\mathrm{id}\,}}_f}:{\Delta A}\Rightarrow {f} )\) is 2-terminal in the lax-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\) of lax-cones over f, but the functor
$$\begin{aligned} {({{\,\mathrm{id}\,}}_{f})_*\circ \Delta }:{{\mathcal {A}}(X,A)}\rightarrow {{{\,\mathrm{Lax}\,}}[\mathbb {2},{\mathcal {A}}](\Delta X,f)} \end{aligned}$$
is not surjective on objects, thus \((A,{{\,\mathrm{id}\,}}_f)\) is not a lax-limit of f.
The objects of the lax-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\) are the following lax-cones over f:
$$\begin{aligned} (A,{{\,\mathrm{id}\,}}_f),\quad (X,{{\,\mathrm{id}\,}}_{f\alpha _0}), \ \ \text {and} \ \ (X,\alpha ). \end{aligned}$$
Each of these objects admits precisely one morphism to \((A,{{\,\mathrm{id}\,}}_f)\) in \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\), given by
$$\begin{aligned} ({{\,\mathrm{id}\,}}_A,({{\,\mathrm{id}\,}}_A,{{\,\mathrm{id}\,}}_f)):&(A,{{\,\mathrm{id}\,}}_f)\rightarrow (A,{{\,\mathrm{id}\,}}_f) \\ (\alpha _0,({{\,\mathrm{id}\,}}_{\alpha _0},{{\,\mathrm{id}\,}}_{f\alpha _0})):&(X,{{\,\mathrm{id}\,}}_{f\alpha _0})\rightarrow (A,{{\,\mathrm{id}\,}}_f)\\ (\alpha _0,({{\,\mathrm{id}\,}}_{\alpha _0},\alpha )):&(X,\alpha )\rightarrow (A,{{\,\mathrm{id}\,}}_f). \end{aligned}$$
As there are no non-trivial 2-morphisms between X and A in \({\mathcal {A}}\), there are no non-trivial 2-morphisms to \((A,{{\,\mathrm{id}\,}}_f)\) in \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\). This shows that \((A,{{\,\mathrm{id}\,}}_f)\) is 2-terminal in the lax-slice \({\Delta }\downarrow ^{\text {\tiny lx}}_{l}{f}\) of lax-cones.
Next, observe that the lax-cone \( {\alpha }:{f\alpha _{0}}\Rightarrow {\alpha _{1}} \) is an object of \({{\,\mathrm{Lax}\,}}[\mathbb {2},{\mathcal {A}}](\Delta X,f)\), but it is not in the image of \(({{\,\mathrm{id}\,}}_f)_*\circ \Delta \). Hence \((A,{{\,\mathrm{id}\,}}_f)\) is not a lax-limit of f. \(\square \)
By requiring \(\alpha \) to be invertible in Counter-example 4.14, we can similarly show that \((A,{{\,\mathrm{id}\,}}_f)\) is 2-terminal in the pseudo-slice of lax-cones (resp. pseudo-cones) over f, but that it is not a lax-limit (resp. pseudo-limit) of f.
5 Bi-Type Limits for the Completionist
At this point we have seen that 2-terminal objects in all slices and 2-dimensional limits do not generally align. In this last section, which we present for completeness, we address one final weakening of the central definitions we have thus far considered. In defining 2-dimensional limits with various strengths of cones, we have always asked for an isomorphism of categories to govern the universal property. However, we might seek to relax this requirement by asking instead that the relevant functor induces only an equivalence of categories. This leads to the following definitions.
Let I and \({\mathcal {A}}\) be 2-categories, and let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. A bi-limit of F comprises the data of an object \(L\in {\mathcal {A}}\) together with a 2-natural transformation \(\lambda :\Delta L\Rightarrow F\) such that, for each object \(X\in {\mathcal {A}}\), the functor
$$\begin{aligned} {\lambda _*\circ \Delta }:{{\mathcal {A}}(X,L)}\rightarrow {[I,{\mathcal {A}}](\Delta X, F)} \end{aligned}$$
given by post-composition with \(\lambda \) is an equivalence of categories.
Similarly, we can define pseudo-bi-limit (resp. lax-bi-limit) by replacing \([I,{\mathcal {A}}]\) in the above with \({{\,\mathrm{Ps}\,}}[I,{\mathcal {A}}]\) (resp. \({{\,\mathrm{Lax}\,}}[I,{\mathcal {A}}]\)).\(^{1}\)
The two aspects of a universal property of a bi-limit may be reformulated more explicitly by expanding the content of the equivalence of categories above. For every \(X\in {\mathcal {A}}\),
for every 2-cone \(\mu :\Delta X\Rightarrow F\), there is a morphism \(f:X\rightarrow L\) in \({\mathcal {A}}\) and an invertible modification \(\mu \Rrightarrow \lambda *f\),
for all morphisms \(f,g:X\rightarrow L\), and for every modification \(\Theta :\lambda *f\Rrightarrow \lambda *g\), there is a unique 2-morphism \(\alpha :f\Rightarrow g\) in \({\mathcal {A}}\) such that \(\lambda *\Delta \alpha =\Theta \).
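These two conditions say precisely that the comparison functor is essentially surjective on objects (1) and fully faithful (2), hence an equivalence; in the notation above, this is the unpacking of

```latex
\lambda_* \circ \Delta \,\colon\, \mathcal{A}(X, L) \xrightarrow{\ \simeq\ } [I, \mathcal{A}](\Delta X, F),
\qquad
\text{(1)} \leftrightarrow \text{essential surjectivity},
\quad
\text{(2)} \leftrightarrow \text{full faithfulness}.
```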
Let \({\mathcal {A}}\) be a 2-category. An object \(L\in {\mathcal {A}}\) is bi-terminal if for all \(X\in {\mathcal {A}}\) there is an equivalence of categories \({\mathcal {A}}(X,L)\mathrel {\simeq }\mathbb {1}\).
In formulating analogous conjectures for the bi-limit and bi-terminal cases, we should pay careful attention to Remark 5.2 (1). Observe that, given a 2-cone \( {\mu }:{\Delta X}\Rightarrow {F} \), the 1-dimensional aspect of the universal property of a bi-limit \((L,\lambda )\) gives only a morphism \((X,\mu )\rightarrow (L,\lambda )\) of the pseudo-slice of 2-cones \({\Delta }\downarrow ^{\text {\tiny }}_{p}{F}\). It is thus inappropriate to look at bi-terminal objects in the strict-slice of 2-cones when attempting to recover a general bi-limit.
Much as was the case for 2-limits, bi-limits are in general bi-terminal objects in the pseudo-slice of 2-cones. In fact, all of the positive results of Sect. 2 follow in this context. We defer all proofs to the paper [3], which deals with such relationships in greater generality. The first result can be deduced from [3, Corollary 7.22] and the second is [3, Corollary 7.25].
Let I and \({\mathcal {A}}\) be 2-categories, and let \( {F}:{I}\rightarrow {{\mathcal {A}}} \) be a 2-functor. If \((L,\lambda :\Delta L\Rightarrow F)\) is a bi-limit of F, then \((L,\lambda )\) is bi-terminal in the pseudo-slice \({\Delta }\downarrow ^{\text {\tiny }}_{p}{F}\) of 2-cones over F.
Suppose \({\mathcal {A}}\) is a 2-category that admits tensors by \(\mathbb {2}\), and let \(F:I\rightarrow {\mathcal {A}}\) be a 2-functor. Then an object is bi-terminal in the pseudo-slice \({\Delta }\downarrow ^{\text {\tiny }}_{p}{F}\) of 2-cones over F if and only if it is a bi-limit of F.
Propositions 5.4 and 5.5 also hold when the 2-cones are replaced by pseudo- and lax-cones.
With every sunrise there is a sunset, and just as the positive results extended themselves to this weaker context, so too do the negative. Since isomorphisms of categories are, in particular, equivalences of categories, a 2-type limit or a 2-terminal object is, in particular, a bi-type limit or a bi-terminal object. Moreover, an examination of all of the counter-examples and reductions referenced in the tables below shows that there are no non-trivial invertible 2-morphisms in the 2-categories involved. This allows us to deduce the following facts. First, the notions of 2-type limits (resp. 2-terminal objects) and bi-type limits (resp. bi-terminal objects) in each of these counter-examples coincide. Second, for each of Counter-examples 2.10 and 4.5, the strict-slice and pseudo-slice coincide. Therefore, all counter-examples and reductions for 2-type limits and 2-terminality seen in previous sections are also counter-examples and reductions for bi-type limits and bi-terminality, as summarised in Tables 3 and 4.
Table 3 Bi-type limits which are not bi-terminal
Table 4 Bi-terminal objects which are not bi-type limits
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Footnote 1: Note that what we call a bi-limit here does not match the typical notion of bi-limit present in the literature, which is usually considered in a weaker context than that of (strict) 2-natural transformations. Pseudo-bi-limits therefore coincide with the usual notion of bi-limits.
Auderset, C.: Adjonctions et monades au niveau des \(2\)-catégories. Cahiers Topologie Géom. Différentielle 15, 3–20 (1974)
Borceux, F., Kelly, G.M.: A notion of limit for enriched categories. Bull. Aust. Math. Soc. 12, 49–72 (1975)
clingman, t., Moser, L.: Bi-initial objects and bi-representations are not so different. Cahiers Topologie Géom. Différentielle Catég. 63(3), 259–330 (2022)
Grandis, M.: Higher dimensional categories. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ (2020). From double to multiple categories
Grandis, M., Paré, R.: Limits in double categories. Cahiers Topologie Géom. Différentielle Catég. 40(3), 162–220 (1999)
Grandis, M., Paré, R.: Persistent double limits and flexible weighted limits. https://www.mscs.dal.ca/~pare/DblPrs2.pdf (2019)
Kelly, G.M.: Basic Concepts of Enriched Category Theory. London Mathematical Society Lecture Note Series, vol. 64. Cambridge University Press, Cambridge (1982)
Kelly, G.M.: Elementary observations on \(2\)-categorical limits. Bull. Aust. Math. Soc. 39(2), 301–317 (1989)
Lack, S.: A 2-categories companion. In Towards higher categories, volume 152 of IMA Vol. Math. Appl., pp. 105–191. Springer, New York (2010)
Street, R.: Limits indexed by category-valued \(2\)-functors. J. Pure Appl. Algebra 8(2), 149–181 (1976)
Verity, D.: Enriched categories, internal categories and change of base. Repr. Theory Appl. Categ. 20, 1–266 (2011)
Both authors are indebted to Emily Riehl for her close readings of and thoughtful inputs on several early drafts of this paper. In addition, both authors are grateful to Jérôme Scherer for his careful input on an early draft. Finally, both authors also wish to extend their gratitude to Alexander Campbell and Emily Riehl for their enthusiasm for what became Counter-example 4.14, which provided the impetus to write this paper.
Open Access funding enabled and organized by Projekt DEAL. This work was realised while both authors were at the Mathematical Sciences Research Institute in Berkeley, California, during the Spring 2020 semester. The first-named author benefited from support by the National Science Foundation under Grant No. DMS-1440140, while at residence in MSRI. The second-named author was supported by the Swiss National Science Foundation under the Project P1ELP2_188039. The first-named author was additionally supported by the National Science Foundation Grant DMS-1652600, as well as the JHU Catalyst Grant.
Johns Hopkins University, 3400 N. Charles St., Baltimore, MD, USA
tslil clingman
Max Planck Institute for Mathematics, Vivatsgasse 7, 53111, Bonn, Germany
Lyne Moser
Correspondence to Lyne Moser.
The authors have no relevant financial or non-financial interests to disclose.
Communicated by Nicola Gambino.
clingman, t., Moser, L. 2-Limits and 2-Terminal Objects are too Different. Appl Categor Struct 30, 1283–1304 (2022). https://doi.org/10.1007/s10485-022-09691-z
2-Dimensional limits
2-Dimensional terminal objects
Slice 2-categories
Mathematics Subject Classification
18A30 | CommonCrawl |
doi: 10.3934/dcdss.2020023
New aspects of time fractional optimal control problems within operators with nonsingular kernel
Tuğba Akman Yıldız 1,*, Amin Jajarmi 2, Burak Yıldız 3,†, and Dumitru Baleanu 4,5,6
Department of Logistics Management, University of Turkish Aeronautical Association, 06790 Ankara, Turkey
Department of Electrical Engineering, University of Bojnord, Bojnord, Iran
Hurma Mah., 252. Sokak, 2/5, Konyaaltı, Antalya, Turkey
Department of Mathematics, Çankaya University, 06530, Ankara, Turkey
Institute of Soft Matter Mechanics, Department of Engineering Mechanics, Hohai University, Nanjing, Jiangsu 210098, China
Institute of Space Sciences, Magurele-Bucharest 077125, Romania
* Corresponding author: Tuğba Akman Yıldız
† PhD graduate from Department of Mathematics, Middle East Technical University, Ankara, Turkey
Received June 2018 Revised September 2018 Published March 2019
This paper deals with a new formulation of time fractional optimal control problems governed by the Caputo-Fabrizio (CF) fractional derivative. The optimality system for this problem is derived, which contains the forward and backward fractional differential equations in the sense of CF. These equations are then expressed in terms of Volterra integrals and solved by a new numerical scheme based on approximating the Volterra integrals. The linear rate of convergence of this method is also justified theoretically. We present three illustrative examples to show the performance of this method. These examples also test the contribution of using the CF derivative for dynamical constraints, and we observe the efficiency of this new approach compared to the classical version of fractional operators.
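As a rough numerical illustration (not the authors' scheme, which approximates the Volterra-integral form of the optimality system), the CF derivative itself can be approximated directly from its definition \({}^{CF}D^{\alpha }f(t)=\frac{M(\alpha )}{1-\alpha }\int _0^t f'(s)\,e^{-\alpha (t-s)/(1-\alpha )}\,ds\) by quadrature. The function name, the central-difference step, and the normalization \(M(\alpha )\equiv 1\) below are assumptions made for the sketch:

```python
import math

def cf_derivative(f, alpha, t, n=2000, M=1.0):
    """Trapezoidal-rule approximation of the Caputo-Fabrizio derivative
    (CF D^alpha f)(t) = M/(1-alpha) * int_0^t f'(s) exp(-alpha(t-s)/(1-alpha)) ds,
    for 0 < alpha < 1; f' is approximated by a central difference."""
    h = t / n
    eps = 1e-6

    def fprime(s):
        return (f(s + eps) - f(s - eps)) / (2 * eps)

    acc = 0.0
    for k in range(n + 1):
        s = k * h
        w = 0.5 if k in (0, n) else 1.0  # trapezoidal end-point weights
        acc += w * fprime(s) * math.exp(-alpha * (t - s) / (1.0 - alpha))
    return M / (1.0 - alpha) * h * acc

# For f(t) = t the exact value is (M/alpha) * (1 - exp(-alpha*t/(1-alpha))),
# so cf_derivative(lambda s: s, 0.5, 1.0) should be close to 2*(1 - 1/e).
print(cf_derivative(lambda s: s, 0.5, 1.0))
```

Note the exponential (nonsingular) kernel: unlike the Caputo derivative, no singularity at \(s=t\) needs special treatment, which is what makes plain trapezoidal quadrature adequate here.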
Keywords: Optimal control, nonsingular kernel, fractional calculus, error estimates, Volterra integrals.
Mathematics Subject Classification: Primary: 49K99, 34A08; Secondary: 34H05, 49M25.
Citation: Tuğba Akman Yıldız, Amin Jajarmi, Burak Yıldız, Dumitru Baleanu. New aspects of time fractional optimal control problems within operators with nonsingular kernel. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020023
A. A. Tateishi, H. V. Ribeiro and E. K. Lenzi, The role of fractional time-derivative operators on anomalous diffusion, Frontiers in Physicss, 5 (2017), 52. Google Scholar
D. Verotta, Fractional dynamics pharmacokinetics-pharmacodynamic models, Journal of Pharmacokinetics and Pharmacodynamics, 37 (2010), 257-276. doi: 10.1007/s10928-010-9159-z. Google Scholar
J. Wang and Y. Zhou, A class of fractional evolution equations and optimal controls, Nonlinear Analysis: Real World Applications, 12 (2011), 262-272. doi: 10.1016/j.nonrwa.2010.06.013. Google Scholar
S. H. Weinberg, Membrane capacitive memory alters spiking in neurons described by the fractional-order Hodgkin-Huxley model, PloS One, 10 (2015), e0126629. doi: 10.1371/journal.pone.0126629. Google Scholar
D. Xue and L. Bai, Numerical algorithms for Caputo fractional-order differential equations, International Journal of Control, 90 (2017), 1201-1211. doi: 10.1080/00207179.2016.1158419. Google Scholar
A.-M. Yang, Y. Han, J. Li and W.-X. Liu, On steady heat flow problem involving Yang–Srivastava–Machado fractional derivative without singular kernel, Thermal Science, 20 (2016), 717-721. doi: 10.2298/TSCI16S3717Y. Google Scholar
X.-J. Yang, F. Gao, J. Machado and D. Baleanu, A new fractional derivative involving the normalized Sinc function without singular kernel, The European Physical Journal Special Topics, 226 (2017), 3567-3575. doi: 10.1140/epjst/e2018-00020-2. Google Scholar
J. Zhang, X. Ma and L. Li, Optimality conditions for fractional variational problems with Caputo–Fabrizio fractional derivatives, Advances in Difference Equations, 2017 (2017), Paper No. 357, 14 pp. doi: 10.1186/s13662-017-1388-7. Google Scholar
Y. Zhang, H. Sun, H. H. Stowell, M. Zayernouri and S. E. Hansen, A review of applications of fractional calculus in Earth system dynamics, Chaos, Solitons & Fractals, 102 (2017), 29-46. doi: 10.1016/j.chaos.2017.03.051. Google Scholar
Close ecological relationship among species facilitated horizontal transfer of retrotransposons
Xianzong Wang & Xiaolin Liu
Horizontal transfer (HT) of genetic material is increasingly being found in both animals and plants and mainly concerns transposable elements (TEs). Many crustaceans have large genomes and are thus likely to harbor high TE content. Their habitat might offer them ample opportunities to exchange genetic material with organisms that are ecologically close but taxonomically distant to them.
In this study, we analyzed the transcriptome of Pacific white shrimp (Litopenaeus vannamei), an important economic crustacean, to explore traces of HT events. From a collection of newly assembled transcripts, we identified 395 highly reliable TE transcripts, most of which were retrotransposon transcripts. One hundred fifty-seven of those transcripts showed highest similarity to sequences from non-arthropod organisms, including ray-finned fishes, mollusks and putative parasites. In total, 16 already known L. vannamei TE families are likely to be involved in horizontal transfer events. Phylogenetic analyses of 10 L. vannamei TE families and their homologues (protein sequences) revealed that L. vannamei TE families were generally closer to sequences from aquatic species. Furthermore, TEs from other aquatic species also tend to group together, although they are often distantly related in taxonomy. Sequences from parasites and microorganisms were also widely present, indicating their possible important roles in HT events. Expression profile analyses of transcripts in two NCBI BioProjects revealed that transcripts involved in HT events are likely to play important roles in antiviral immunity. More specifically, those transcripts might act as inhibitors of antiviral immunity.
Close ecological relationship, especially predation, might greatly facilitate HT events among aquatic species. This could be achieved through exchange of parasites and microorganisms, or through direct DNA flow. The occurrence of HT events may be largely incidental, but the effects could be beneficial for recipients.
Horizontal transfer (HT) of genetic material between reproductively isolated species is an important mechanism in the evolution of prokaryotic genomes [1–3]. Recent studies showed that HT events are also widespread in animals and plants and mainly concern transposable elements (TEs) [4–12]. TEs are usually grouped into two distinct classes: class I elements (retrotransposons) and class II elements (DNA transposons) [13]. Retrotransposons, which integrate into new sites via a copy-and-paste mechanism, are often the major components in the genomes of many eukaryotic species, especially those with large genomes [14]. Retrotransposons constitute over 50 % of the genomes of many plants [15]. In mammals, the activity of LINE-1 (L1) retrotransposons alone generated at least 20 % of the genome [16]. The horizontally transferred TEs are also mainly retrotransposons [6, 11]. However, unlike retroviruses, retrotransposons do not encode an envelope protein and hence require a vector between species to transpose horizontally. Such vectors are often thought to be parasites, which have ample opportunities to exchange genetic material with their hosts as the result of an intimate, long-term physical association [12]. In eukaryotes, the underlying mechanisms are largely unknown, but physical proximity between species appears to be a prerequisite for virtually all HT events and may consequently increase the likelihood of HT. If HT also plays an important role in eukaryotic evolution, we may expect to find more evidence of HT events among species that are distantly related in taxonomy yet live in the same habitat.
The ancient crustaceans are a great model for investigating horizontal TE transfer (HTT) in eukaryotes. Many of them have large genomes and are thus likely to harbor high TE content [17]. Decapod crustaceans, for instance, have genome sizes ranging from 1.05 Gb to 40 Gb (for humans, the value is around 3 Gb). They have ample opportunities to intimately connect with fishes, mollusks and other animals that also inhabit fresh or salt water. Furthermore, this connection is much less disturbed by geographical isolation than it is for land animals. Therefore, crustaceans may at least have some sequences that show higher similarity to other aquatic animals than to land arthropods. However, one big drawback is that the whole-genome sequencing projects of most crustaceans are not finished yet. Even so, next-generation sequencing has made comprehensive transcriptome sequences available for many crustaceans [18–20]. HTTs detected in transcriptomes are of particular importance: the elements involved are still active and may still have an impact on genome evolution.
In this study, we particularly focused on the Pacific white shrimp, Litopenaeus vannamei. This species has a genome size approximately 70 % of that of the human genome and is likely to harbor high TE content [21]. Due to its high commercial value, extensive transcriptomic efforts have been made to better understand its immunity, growth and development [18, 22]. We identified hundreds of reliable TE fragments from an up-to-date transcriptome assembly of L. vannamei and show that many of them are involved in HTT events.
Overview of TE transcripts in L. vannamei transcriptome
We identified 395 TE transcripts in total, all of which have transposon-related conserved domains, and their actual existence could be confirmed by sequence similarity searches against the whole collection of L. vannamei ESTs and nucleotides (mostly mRNA/cDNA). Furthermore, we ensured that they are not transcripts of single/low-copy genes that happen to contain TE-related domains; e.g., the L. vannamei elongation factor 2 (EF2, GenBank ID: GU136230.1) mRNA contains a conserved domain that is a member of the TetM_like subfamily (NCBI CDD accession number: cd04168), which is typically found on mobile genetic elements. Of the 395 transcripts, 380 could be identified as transcripts of retrotransposons, 284 of which were further identified as Non-LTR retrotransposon transcripts (Table 1 and Additional file 1). The corresponding superfamilies of Non-LTR retrotransposon transcripts were also more diverse than those of LTR retrotransposon transcripts. Two hundred thirty transcripts could be identified as transcripts of already known L. vannamei TE families. It should be noted that two families, Gypsy-3_LVa-LTR and Penelope-6_LVa, were not consistent with their identified superfamilies. This is possibly the result of nested TEs (the insertion of TEs into pre-existing TEs), especially for the corresponding transcript of Gypsy-3_LVa-LTR, which contains a conserved RT-nLTR domain and consequently resulted in the identification of the superfamily as RTE.
Table 1 Classification of 395 TE transcripts in L. vannamei transcriptome
L. vannamei TE transcripts showed high similarity to nucleotide sequences from distantly related aquatic species
By querying against the NCBI BLAST Nucleotide database, we found that 244 transcripts had significant hits (E-value < 1e-5). The taxa of organisms present in top hits were extracted and counted. In total, 17 taxa were used to distinguish different species and evaluate their relationships. As shown in Table 2, arthropods were the most frequent top hits, followed by ray-finned fishes (actinopterygii) and mollusks. Species in cnidaria, nematoda and platyhelminthes, many of which are well-known parasites, were also present among top hits in considerable numbers. Overall, the species among top hits represented a wide range of taxa, but most of them either also live in salt/fresh water or are potential parasites. Exceptions come from plants, mammals and birds; however, their frequencies as top hits are very low. It is noteworthy that as many as 30 transcripts showed high similarity to sequences from viruses. Further analysis revealed that they are all transcripts of Penelope-1_LVa, which contains fragments of white spot syndrome virus (WSSV). WSSV is one of the most fatal threats to shrimp farming across the globe [23, 24]; therefore, future studies on this TE family might afford a novel perspective for antiviral research.
Table 2 Taxa of TE transcripts' top hits in querying against NCBI BLAST Nucleotide database
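As a rough illustration, the tally behind Table 2 can be reproduced from BLAST tabular output. The sketch below is our assumption about such a workflow, not the authors' actual script: it takes pre-parsed (query, subject accession, E-value) rows and a hypothetical accession-to-taxon lookup, keeps the single best significant hit per transcript, and counts the taxa of those top hits.

```python
from collections import Counter

def count_top_hit_taxa(blast_rows, accession_to_taxon):
    """Tally the taxon of the single best hit per query transcript.

    blast_rows: iterable of (query_id, subject_accession, evalue) tuples,
    e.g. parsed from BLAST tabular output (-outfmt 6). accession_to_taxon
    is a hypothetical lookup built beforehand from NCBI taxonomy data.
    """
    best = {}  # query_id -> (evalue, accession)
    for query, acc, evalue in blast_rows:
        if evalue >= 1e-5:          # significance cut-off used in the study
            continue
        if query not in best or evalue < best[query][0]:
            best[query] = (evalue, acc)
    return Counter(accession_to_taxon[acc] for _, acc in best.values())

rows = [("t1", "A1", 1e-30), ("t1", "B1", 1e-10),
        ("t2", "B2", 1e-8), ("t3", "C1", 1e-3)]
taxa = {"A1": "Arthropoda", "B1": "Actinopterygii", "B2": "Mollusca"}
print(count_top_hit_taxa(rows, taxa))  # t3 is excluded by the E-value cut-off
```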
Most transcripts whose top hits were arthropods were transcripts of Non-LTR retrotransposons, especially of the RTE superfamily, while those whose top hits were ray-finned fishes and mollusks were mainly transcripts of LTR retrotransposons. The overall transcripts of L. vannamei, however, are mainly arthropod-conservative [18]. The simplest explanation for this phylogenetic incongruence is that the transcripts whose top hits were non-arthropod species are involved in HTTs. Of 157 such transcripts, 83 could be identified as transcripts of already known L. vannamei TE families. There are 16 such TE families in total, which were used to query the NCBI BLAST chromosome and HTGS databases in order to detect their homologues in the genomes of other species. As shown in Table 3, for query sequences that had significant hits, the top hits were also mostly from aquatic species. Yet it should be noted that query coverage was very low for every query sequence, making it impossible to obtain nucleotide homologues long enough for phylogenetic analyses. Furthermore, nearly half of the 16 TE families did not have significant hits. This suggests that the common ancestors of the 16 TE families and their homologues have diverged greatly among species. Consequently, the top hits of TE families may not be their nearest neighbors in phylogenies [25], and stronger evidence of HTT is needed.
Table 3 Top hits of 16 L. vannamei TE families in querying against chromosome and HTGS databases
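The low query coverage noted above can be computed directly from HSP coordinates in BLAST output. A minimal sketch (a helper of our own, assuming 1-based inclusive BLAST coordinates; overlapping HSPs are merged so shared positions are not double-counted):

```python
def query_coverage(hsps, query_length):
    """Fraction of the query covered by at least one HSP.

    hsps: list of (qstart, qend) 1-based inclusive coordinates from a
    BLAST hit. Positions covered by several overlapping HSPs are
    counted only once.
    """
    covered = set()
    for qstart, qend in hsps:
        covered.update(range(qstart, qend + 1))
    return len(covered) / query_length

# A 1000-bp query with two partially overlapping HSPs:
cov = query_coverage([(1, 200), (150, 400)], 1000)
print(f"{cov:.0%}")  # 40% — well below the >60 % cut-off later used for protein hits
```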
Phylogenetic incongruence of TEs are closely linked with ecological relationships among species
To tackle the above problem, we used the protein sequences of 10 L. vannamei TE families (Table 4 and Additional file 2; the remaining six families do not have annotated protein sequences) to query the NCBI BLAST protein database, in order to find hits with higher query coverage (>60 %). Phylogenetic analysis using the maximum likelihood method was conducted for each query sequence and its significant hits (E-value of 0). We used FastTree and RAxML (RAxML trees are provided in Additional file 3; Additional files 4, 5, 6, 7, 8, 9, 10 are FastTree trees) to infer phylogenetic trees [26, 27]. Both methods gave similar topologies around the L. vannamei sequences; only Nimb-2_LVa and RTE-3_LVa had different closest neighbors between the two methods. Of the 10 L. vannamei TE families, seven were most closely related to non-arthropod aquatic animals and only three were most closely related to insects (Nimb-1_LVa, Nimb-2_LVa and RTE-1_LVa). Besides further confirming that many L. vannamei TE families are involved in HTT events, the trees reveal more interesting details.
Table 4 Protein sequences of 10 L. vannamei TE families used for blastp search
For example, many parasites are present in the tree in Fig. 1, which indicates that parasitism might play important roles in HTT. Still, it may not be indispensable: in Fig. 2, there is no parasite at all, and the close relationship between bees (Microplitis demolitor and Bombus terrestris) and mung bean (Vigna radiata var. radiata) could not be explained by parasitism (indicated by arrow 1 in Fig. 1), either. Aquatic animals tend to group together. However, many of them are actually very distant from each other in evolution (Table 5): purple sea urchin (Strongylocentrotus purpuratus) and bony fishes (indicated by arrow 2 in Fig. 1) have diverged for at least 600 million years [28, 29]; Saccoglossus kowalevskii, Pacific oyster (Crassostrea gigas), hydrozoans (Hydra vulgaris), stony corals (Acropora digitifera), sea anemones (Exaiptasia pallida) and Priapulus caudatus also represent a wide range of taxa (indicated by arrow 3 in Fig. 1). TE families whose closest neighbors were arthropods also had relatively close neighbors from distantly related species (Fig. 3 and Additional files 4 and 5), indicating that their homologues might still be involved in HTT events. Another point is that microorganisms were also widely present in the trees (Additional files 4, 5 and 9). Indeed, microorganisms are important donors of horizontally transferred material found in animals [30]. Here, we conclude that microorganisms and parasites might play similar roles in HTT events: important, yet not indispensable.
Phylogenetic tree of BEL-1_LVa-I and its homologues. Local support values are only shown for those nodes with support values no less than 0.9. Organism names of respective sequences are colored according to their ecological habit or taxonomy; detailed information of the classified terms could be found in Table 5
Phylogenetic tree of Gypsy-14_LVa-I and its homologues
Table 5 Terms used to distinguish species
Phylogenetic tree of RTE-1_LVa and its homologues
Overall, organisms with close ecological relationships tend to group together, even when distantly related in taxonomy. When referring to ecological relationships, we should not overlook the fact that L. vannamei and other aquatic species form a huge food web in water. Therefore, predation among species might greatly facilitate HTTs, either through the exchange of parasites and microorganisms, or through the direct flow of DNA. After all, naked DNA and RNA can circulate in animal bodily fluids [31], and the sheer abundance of TEs may also ensure that some copies succeed in passing through a digestive system and other barriers.
It has been proposed that HTTs among plants might provide an escape route from silencing and elimination and are thus essential for TEs' survival in plants [6]. On the other hand, the acquisition of foreign genes by horizontal transfer may enhance the evolutionary potential of the recipient lineage [12]. Although TE expansions may look selfish and parasitic, TEs are actually important drivers of genome evolution: they can provide raw material for novel genes and contribute to regulation and the generation of allelic diversity [14, 32, 33]. In this study, the frequent exchange of TEs between L. vannamei and other aquatic species may likewise provide some evolutionary advantages for the species involved.
HTT involved transcripts might play important roles in antiviral immunity
To elucidate whether TEs, especially TEs involved in HTT events, have any biological functions, we analyzed the expression level of all transcripts in two NCBI BioProjects: (i) the transcriptome of five early stages of L. vannamei, namely embryo, nauplius, zoea, mysis and postlarvae; and (ii) the haemocyte transcriptome of L. vannamei after successive stimulation with recombinant VP28. VP28 is known as one of the major envelope proteins of WSSV and is likely to play a key role in the initial steps of systemic WSSV infection in shrimp [34]. As shown in Fig. 4, TE/HTT and overall transcripts showed different expression patterns in both BioProjects: in early developmental stages, the proportion of differentially expressed TE/HTT transcripts was generally lower than that of overall transcripts (Fig. 4a); in response to VP28 stimulation, by contrast, the proportion of differentially expressed TE/HTT transcripts was consistently higher than that of overall transcripts (Fig. 4b). Evidently, even if TE/HTT transcripts play some roles in early development, their effects would be diluted among overall transcripts; their possible roles in antiviral immunity, on the other hand, are likely to be enriched. Using One-Class Support Vector Machine (SVM) models [35, 36], we predicted transcripts that showed expression patterns similar to HTT transcripts in both BioProjects. During early developmental stages, nine transcripts showed expression patterns similar to HTT transcripts; however, none of them had significant blastx hits (E-value < 1e-5), making it impossible to deduce their possible functions. Under VP28 stimulation, 34 transcripts showed expression patterns similar to HTT transcripts, of which seven had significant blastx hits with ascertained biological functions (Table 6). The transcripts listed in Table 6 (except the last one) are not likely to be direct immune genes, yet their fundamental roles must be indispensable in antiviral immunity (and under other biotic stresses) [37].
Expression profile of overall transcripts, TE transcripts and HTT transcripts. Raw sequencing reads of two NCBI BioProjects were aligned and counted: transcriptome of five early stages in L. vannamei (a) and haemocyte transcriptome of L. vannamei after the successive stimulation of recombinant VP28 (b). The threshold of differential expression represents the max fold change of transcript read counts among different experimental groups
Table 6 Transcripts that showed similar expression pattern to HTT transcripts under VP28 stimulation
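The differential-expression criterion used throughout this section — the maximum fold change of a transcript's read counts among experimental groups — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the pseudocount is our own assumption to guard against zero counts and is not stated in the study.

```python
def max_fold_change(counts, pseudocount=1.0):
    """Maximum pairwise fold change of a transcript's read counts across
    experimental groups. The pseudocount (an assumption of this sketch)
    prevents zero counts from producing division by zero."""
    adjusted = [c + pseudocount for c in counts]
    return max(adjusted) / min(adjusted)

def differentially_expressed(expr, threshold):
    """IDs of transcripts whose max fold change among groups exceeds threshold."""
    return [tid for tid, counts in expr.items()
            if max_fold_change(counts) > threshold]

expr = {"t1": [5, 5, 6, 5],     # flat across groups
        "t2": [2, 40, 3, 2],    # strongly differential
        "t3": [0, 9, 1, 1]}     # zero count handled by the pseudocount
print(differentially_expressed(expr, 6))  # ['t2', 't3']
```

At a threshold of 1 every transcript passes (any ratio exceeds 1 only if counts differ, but the whole set is taken as the baseline), which matches the paper's use of threshold 1 to denote the full collection.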
The injection of VP28 into shrimp has been shown to increase their resistance to invasive WSSV [38]. GO enrichment analysis (BioProject: PRJNA233549) indicated that successive VP28 stimulation could modulate cytoskeleton integration and redox to promote the phagocytosis activity of shrimp haemocytes [38]. Apart from the up-regulation of antiviral genes, the down-regulation of some other functional genes may also be helpful. For example, the small GTP-binding protein Rab7 (GenBank ID: FJ811529.1) is a VP28-binding protein [39]. Injection of VP28 down-regulated the expression of the Rab7 gene (Additional file 11), which is in accordance with the previous finding that suppression of Rab7 inhibits WSSV (and also yellow head virus, YHV) infection in shrimp [40]. To elucidate the exact roles of TE/HTT transcripts, we further analyzed the expression levels of overall/TE/HTT transcripts in different experimental groups: blank (no treatment), control (two injections of PBS buffer), single VP28 (one injection of PBS buffer and one injection of VP28) and successive VP28 (two injections of VP28) [38]. Two thresholds of differential expression were selected: at the threshold of 1, the whole collection of a transcript set (overall, TE or HTT) is included; at the threshold of 6, a transcript is included only if its max fold change among experimental groups exceeds 6. At the threshold of 1, the mean expression levels varied, but no statistical significance (P < 0.05) was found in any transcript set, in accordance with the hypothesis that most genes are not differentially expressed [41] (Fig. 5). At the threshold of 6, on the other hand, the expression level of HTT transcripts in the successive VP28 group was significantly lower than in the other groups (Fig. 6). Furthermore, at the threshold of 6, there are 39 HTT transcripts, seven of which contain fragments of WSSV (as described above in section 2 of Results and discussion; also see Additional file 12).
Taken together, we suggest that the down-regulation of HTT transcripts under VP28 stimulation is not likely to be an incidental or side effect, but reflects their potential inhibitory roles in antiviral immunity.
Average read counts of transcripts in different experimental groups. The BioProject is the haemocyte transcriptome of L. vannamei after the successive stimulation of recombinant VP28. The threshold of differential expression is 1, which indicates that whole collection of overall transcripts (a), TE transcripts (b) and HTT transcripts (c) are included. Error bars represent SE; no significant difference exists between any two groups (P > 0.05, determined by one-way ANOVA)
Average read counts of differentially expressed transcripts in different experimental groups. The BioProject is also the haemocyte transcriptome of L. vannamei after the successive stimulation of recombinant VP28. The threshold of differential expression is 6, therefore around 11 % of overall transcripts (a), 20 % of TE transcripts (b) and 25 % of HTT transcripts (c) are included (as indicated in Fig. 4). Two asterisks represent very significant difference (**P < 0.01, determined by one-way ANOVA) between mean values of two groups, while one asterisk represents significant difference (*P < 0.05); error bars represent SE
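The group comparisons in Figs. 5 and 6 rely on one-way ANOVA. For completeness, the F statistic behind those comparisons can be computed with the standard library alone; in practice a routine such as scipy.stats.f_oneway (which also returns the p-value) would be used, so the sketch below shows only the arithmetic.

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for k groups of read counts.

    F = (between-group mean square) / (within-group mean square).
    Returns (F, df_between, df_within); the p-value lookup against the
    F distribution is omitted here.
    """
    all_values = [x for g in groups for x in g]
    grand = mean(all_values)
    k, n = len(groups), len(all_values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

f_stat, df_b, df_w = one_way_anova_f([[1, 2, 3], [2, 3, 4]])
print(round(f_stat, 3), df_b, df_w)  # 1.5 1 4
```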
Although the number of presumptive horizontally transferred genes is increasing, the exact role of HT/HTT in the evolution of multicellular eukaryotes is still blurry, and our knowledge of the underlying mechanisms is even more limited. In this study, we found that in L. vannamei, an ancient crustacean, a considerable number of transcripts are also involved in HTT events. Nearly all of the HTT transcripts are transcripts of retrotransposons, which is in accordance with previous findings. Phylogenetic analyses revealed that L. vannamei TEs are often closest to TEs from aquatic species. Furthermore, TEs from other aquatic species, which are often taxonomically very distant from one another, also tend to group together. We suggest that HTT events might frequently occur among species that have close ecological relationships, the underlying impetus of which might be predation among those species. Through expression profile analyses, we found that TE/HTT transcripts are likely to play important roles in antiviral immunity, and they might actually act as inhibitors of antiviral immunity.
Identification of transcripts derived from TEs
A new transcriptome assembly of L. vannamei was downloaded from http://oaktrust.library.tamu.edu/handle/1969.1/152151; it contains 110,474 contigs with an N50 of 2701 bases [18]. Each assembled contig was treated as a transcript, regardless of alternative transcripts sharing the same precursor. To exclude assembly artifacts [42] and possible contamination during sampling, these transcripts were searched with local blastn against the whole collection of L. vannamei sequences downloaded from NCBI; 56,608 transcripts with high similarity to existing L. vannamei nucleotide sequences or expressed sequence tags (ESTs) were retained for further analysis (for details, see Additional file 13). To isolate TE-related transcripts, we conducted a two-step local BLAST search for similar domains/sequences. First, the 56,608 transcripts were searched with blastx against cdd_delta [43], which contains 26,482 conserved domain sequences downloaded from ftp://ftp.ncbi.nlm.nih.gov/blast/db/; 813 transcripts were identified as TE-related because each had at least one TE-related hit (a hit whose sequence description contains the string 'transposon'). Second, to exclude transcripts of single/low-copy genes that merely happen to contain TE-related domain(s), two further searches were conducted for these 813 transcripts: (i) blastx against cdd_delta again, and (ii) tblastx against a database of 45,725 repetitive sequences downloaded from Repbase Update (http://www.girinst.org/, release 20.09) [44]. The criterion was as follows: for a given query transcript, the E-value of the top tblastx hit had to be lower than 1e-5 and also lower than that of the top blastx hit. In this way, 395 transcripts were identified with high confidence as transcripts of TEs.
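The decision rule of the second step can be sketched as follows (the E-value dictionaries are hypothetical stand-ins for parsed BLAST output, not the paper's code):

```python
# Decision rule of the two-step TE-transcript screen. Each candidate is
# represented by the E-values of its top blastx (cdd_delta) and top
# tblastx (Repbase) hits; None means no hit at all.

def is_te_transcript(blastx_evalue, tblastx_evalue, threshold=1e-5):
    """Keep a transcript only if its top Repbase (tblastx) hit is both
    significant and stronger than its top conserved-domain (blastx) hit,
    so single/low-copy genes with incidental TE domains are excluded."""
    if tblastx_evalue is None or tblastx_evalue >= threshold:
        return False                      # no significant Repbase hit
    if blastx_evalue is not None and tblastx_evalue >= blastx_evalue:
        return False                      # domain hit stronger: likely a host gene
    return True

# Toy candidates: (top blastx E-value, top tblastx E-value)
candidates = {
    "tx1": (1e-30, 1e-40),   # Repbase hit wins -> TE transcript
    "tx2": (1e-50, 1e-10),   # domain hit stronger -> rejected
    "tx3": (None, 1e-3),     # Repbase hit too weak -> rejected
}
te_ids = [t for t, (bx, tb) in candidates.items() if is_te_transcript(bx, tb)]
# te_ids -> ["tx1"]
```

Comparing the two E-values directly is what distinguishes a genuine TE transcript from a host gene that merely carries a TE-derived domain.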
Characterization of superfamilies and families of TE derived transcripts
The 395 TE-derived transcripts were searched with tblastx to determine their superfamilies and with blastn to determine their families, against the same database of 45,725 repetitive sequences described above. Briefly, a transcript was assigned to the same superfamily as its top tblastx hit; to assign a family, the top blastn hit had to come from L. vannamei and meet an E-value cut-off of 1e-20. In this way, 376 transcripts had their superfamilies determined, whereas only 230 could be identified as transcripts of known L. vannamei TE families. In total, 31 families were identified, and only two family assignments were inconsistent with the determined superfamilies.
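A minimal sketch of this assignment rule, assuming each top hit is parsed into a (name, organism, E-value) tuple:

```python
# Superfamily/family assignment from the two Repbase searches.
# tblastx_top and blastn_top are hypothetical (name, organism, evalue)
# tuples for the best hit; None means no hit.

def classify_transcript(tblastx_top, blastn_top, family_cutoff=1e-20):
    """Superfamily comes from the top tblastx hit; a family is assigned
    only when the top blastn hit is an L. vannamei sequence meeting the
    E-value cut-off."""
    superfamily = tblastx_top[0] if tblastx_top else None
    family = None
    if blastn_top:
        name, organism, evalue = blastn_top
        if organism == "Litopenaeus vannamei" and evalue <= family_cutoff:
            family = name
    return superfamily, family
```

This asymmetry (a loose superfamily call, a strict same-species family call) explains why 376 transcripts received a superfamily but only 230 a family.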
Evidence of HTTs and identification of L. vannamei TE families involved in HTTs
A Biopython [45] module, Bio.Blast.NCBIWWW, was used to query the NCBI BLAST Nucleotide (nt) database over the Internet with the 395 TE-derived transcripts. All hits with E-values below 1e-5 were screened for their taxa. To distinguish the source organisms of the hits effectively, 17 taxa were selected (as shown in Table 2) and their frequencies as top hit were counted. Since penaeid shrimps are very closely related [46], hits from the family Penaeidae (mainly L. vannamei, Penaeus monodon and Marsupenaeus japonicus) were filtered out of the taxon Arthropoda. Transcripts showing the highest sequence similarity to distantly related taxa, i.e., whose top hits were not from arthropods, were taken to be involved in HTTs. If the corresponding families of those transcripts were from L. vannamei, they were isolated. In total, 16 L. vannamei TE families, representing 83 transcripts, were possibly involved in HTTs.
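The screening logic can be sketched as below (the hit representation is a simplifying assumption; the real screen distinguished the full set of 17 taxa from Table 2):

```python
# Screening nt hits for HTT evidence. Each hit is a hypothetical
# (organism, taxon, evalue) tuple, sorted by ascending E-value and
# already restricted to E < 1e-5.

PENAEIDAE = {"Litopenaeus vannamei", "Penaeus monodon", "Marsupenaeus japonicus"}

def is_htt_candidate(hits):
    """Skip hits from closely related penaeid shrimp, then flag the
    transcript when the best remaining hit is non-arthropod."""
    for organism, taxon, _evalue in hits:
        if organism in PENAEIDAE:
            continue
        return taxon != "Arthropoda"
    return False   # only penaeid hits: no HTT evidence
```

Dropping penaeid hits first is essential: the query species itself and its close relatives would otherwise always dominate the top of the hit list.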
Presence of HTT-involved L. vannamei TE families' homologues in other species
The Bio.Blast.NCBIWWW module was also used to search the 16 L. vannamei TE families for homologues in the NCBI BLAST chromosome and HTGS (high-throughput genomic sequences) databases, respectively, with an E-value threshold of 1e-10. For a given TE family, its best hit against each of the two databases was extracted, and the taxon and organism of each hit were screened as described above.
Phylogenetic analyses
Of the 16 L. vannamei TE families, 10 have annotated coding regions (CDS). The longest protein sequence of each such TE family (where there was more than one CDS) was therefore extracted and pooled. Conserved domains within these protein sequences were predicted with the NCBI online CDD search tool (http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi), and the results are displayed in Table 4. These protein sequences were used for blastp searches against the NCBI BLAST Protein (nr) database. The E-value threshold was set to 1e-20; in practice, the E-value of all significant hits was 0. To remove redundancy, hits for a given query sequence were selected as follows: hits covering less than 60 % of the query length were discarded, the organisms of the remaining hits were screened, and only the top hit from each organism was retained for further analysis (Additional file 14). The selected protein sequences were downloaded from NCBI using Batch Entrez. All sequences, including the queries, were aligned with MUSCLE [47]. We used FastTree [26] and RAxML [27] to construct phylogenetic trees from the multiple alignments (Additional file 15). FastTree trees were built using the default JTT+CAT model with a gamma approximation on substitution rates; RAxML trees were built using the LG model (selected by an automatic test of all models), a gamma approximation on substitution rates, and 100 bootstraps. Approximately unbiased (AU) tests of RAxML tree topologies were carried out with CONSEL [48].
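The redundancy-removal step can be sketched as follows (the hit tuples and best-first sort order are assumptions about how the blastp output is parsed; the thresholds are the ones stated above):

```python
# Redundancy removal for blastp hits before building the alignments.
# Each hit is a hypothetical (accession, organism, align_len, evalue)
# tuple; the list is assumed sorted best-first.

def dereplicate_hits(hits, query_len, min_coverage=0.60):
    """Drop hits covering less than 60 % of the query length, then
    keep only the best hit per organism."""
    kept, seen = [], set()
    for accession, organism, align_len, _evalue in hits:
        if align_len / query_len < min_coverage:
            continue                  # insufficient query coverage
        if organism in seen:
            continue                  # organism already represented
        seen.add(organism)
        kept.append(accession)
    return kept

hits = [
    ("A1", "Danio rerio", 700, 0.0),
    ("A2", "Danio rerio", 650, 0.0),       # same organism: dropped
    ("B1", "Xenopus laevis", 200, 1e-80),  # 20 % coverage: dropped
    ("C1", "Hydra vulgaris", 900, 0.0),
]
selected = dereplicate_hits(hits, query_len=1000)   # -> ["A1", "C1"]
```

One representative per organism keeps the alignments compact without losing taxonomic breadth for the trees.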
Identification of differentially expressed transcripts
Raw sequencing data of two NCBI BioProjects, PRJNA253518 and PRJNA233549, were downloaded from the NCBI ftp site (ftp://ftp.ncbi.nlm.nih.gov/) (Additional file 16). Project PRJNA253518 is a transcriptome of five early developmental stages of L. vannamei, namely embryo, nauplius, zoea, mysis and postlarva. Project PRJNA233549 is a haemocyte transcriptome of L. vannamei after successive stimulation with recombinant VP28 [38]. To find transcripts differentially expressed under different circumstances, reads from the two projects were aligned to the 56,608 transcripts using the Burrows-Wheeler Alignment tool (BWA, version 0.7.5a) [49]. The number of reads unambiguously matching each transcript was counted using the HTSeq framework [50], and these counts were normalized with edgeR [41, 51] for the subsequent differential expression analysis. We set a range of values (1 to 40) as thresholds for the degree of differential expression. Briefly, the read counts (representing expression levels) of a given transcript usually differ between experimental groups and thus have a maximum and a minimum (if the minimum is 0, a pseudocount of one is added). The max fold change of a transcript within a BioProject is calculated as below:
$$ \max \mathrm{fold}\;\mathrm{change} = \mathrm{maximum}\ \mathrm{count}/\mathrm{minimum}\ \mathrm{count} $$
Naturally, at a threshold of 1 all transcripts are included, while at a threshold of 10 only 20 % or fewer transcripts are included (see Fig. 4).
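A minimal implementation of the max fold change definition above, with the pseudocount handling described:

```python
def max_fold_change(counts):
    """Max fold change of one transcript across the groups of a
    BioProject: maximum count divided by minimum count, with a
    pseudocount of 1 substituted when the minimum is zero."""
    hi, lo = max(counts), min(counts)
    if lo == 0:
        lo = 1   # pseudocount, as described in the text
    return hi / lo

mfc = max_fold_change([0, 7, 21])   # minimum is 0 -> pseudocount -> 21/1
```

A transcript passes a given threshold when its max fold change exceeds that threshold, which is how the inclusion percentages above are derived.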
To predict transcripts of functional genes (other than TEs) that showed expression patterns similar to HTT transcripts, we built One-Class SVM models [35] implemented in scikit-learn [36], a Python module for machine learning, with the default RBF kernel. HTT transcripts with a max fold change above four (in order to obtain more than 50 training samples) in either BioProject were selected as training data. Transcripts predicted to be positive were collected and used for blastx searches against the NCBI BLAST Protein (nr) database.
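A sketch of how such a one-class model can be set up with scikit-learn; the synthetic data, feature layout, sample sizes and nu value are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical expression profiles: rows are transcripts, columns are
# per-group expression values (in the paper, the features are read
# counts of HTT transcripts across the BioProjects' groups).
train = rng.normal(loc=5.0, scale=1.0, size=(60, 5))   # HTT-like training set
others = np.vstack([
    rng.normal(loc=5.0, scale=1.0, size=(5, 5)),       # profiles like the HTT class
    rng.normal(loc=0.0, scale=1.0, size=(5, 5)),       # clearly different profiles
])

# Default RBF kernel, as in the text; nu bounds the fraction of
# training points treated as outliers.
model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(train)
pred = model.predict(others)   # +1 = resembles the training class, -1 = outlier
```

Transcripts predicted +1 would then be carried forward to the blastx annotation step.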
Zhaxybayeva O, Doolittle WF. Lateral gene transfer. Curr Biol. 2011;21(7):R242–6.
Ochman H, Lawrence JG, Groisman EA. Lateral gene transfer and the nature of bacterial innovation. Nature. 2000;405(6784):299–304.
Skippington E, Ragan MA. Lateral genetic transfer and the construction of genetic exchange communities. FEMS Microbiol Rev. 2011;35(5SI):707–35.
Chapman JA, Kirkness EF, Simakov O, Hampson SE, Mitros T, Weinmaier T, Rattei T, Balasubramanian PG, Borman J, Busam D, et al. The dynamic genome of Hydra. Nature. 2010;464(7288):592–6.
Danchin EGJ, Rosso MN, Vieira P, de Almeida-Engler J, Coutinho PM, Henrissat B, Abad P. Multiple lateral gene transfers and duplications have promoted plant parasitism ability in nematodes. P Natl Acad Sci USA. 2010;107(41):17651–6.
El Baidouri M, Carpentier MC, Cooke R, Gao D, Lasserre E, Llauro C, Mirouze M, Picault N, Jackson SA, Panaud O. Widespread and frequent horizontal transfers of transposable elements in plants. Genome Res. 2014;24(5):831–8.
Gladyshev EA, Meselson M, Arkhipova IR. Massive horizontal gene transfer in bdelloid rotifers. Science. 2008;320(5880):1210–3.
Graham LA, Lougheed SC, Ewart KV, Davies PL. Lateral Transfer of a Lectin-Like Antifreeze Protein Gene in Fishes. PLoS ONE. 2008;3(7):e2616.
Hotopp JCD, Clark ME, Oliveira DCSG, Foster JM, Fischer P, Munoz Torres MC, Giebel JD, Kumar N, Ishmael N, Wang S, et al. Widespread lateral gene transfer from intracellular bacteria to multicellular eukaryotes. Science. 2007;317(5845):1753–6.
Rot C, Goldfarb I, Ilan M, Huchon D. Putative cross-kingdom horizontal gene transfer in sponge (Porifera) mitochondria. BMC Evol Biol. 2006;6:71.
Walsh AM, Kortschak RD, Gardner MG, Bertozzi T, Adelson DL. Widespread horizontal transfer of retrotransposons. P Natl Acad Sci USA. 2013;110(3):1012–6.
Wijayawardena BK, Minchella DJ, DeWoody JA. Hosts, parasites, and horizontal gene transfer. Trends Parasitol. 2013;29(7):329–38.
Wicker T, Sabot F, Hua-Van A, Bennetzen JL, Capy P, Chalhoub B, Flavell A, Leroy P, Morgante M, Panaud O, et al. A unified classification system for eukaryotic transposable elements. Nat Rev Genet. 2007;8(12):973–82.
Kazazian HH. Mobile Elements: Drivers of Genome Evolution. Science. 2004;303(5664):1626–32.
Kumar A, Jeffrey B. Plant retrotransposons. Annu Rev Genet. 1999;33:479–532.
Boissinot S, Chevret P, Furano AV. L1 (LINE-1) retrotransposon evolution and amplification in recent human history. Mol Biol Evol. 2000;17(6):915–28.
Piednoël M, Donnart T, Esnault C, Graça P, Higuet D, Bonnivard E. LTR-Retrotransposons in R. exoculata and Other Crustaceans: The Outstanding Success of GalEa-Like Copia Elements. PLoS ONE. 2013;8(3):e57675.
Ghaffari N, Sanchez-Flores A, Doan R, Garcia-Orozco KD, Chen PL, Ochoa-Leyva A, Lopez-Zavala AA, Carrasco JS, Hong C, Brieba LG, et al. Novel transcriptome assembly and improved annotation of the whiteleg shrimp (Litopenaeus vannamei), a dominant crustacean in global seafood mariculture. Sci Rep-UK. 2014;4:7081.
Li J, Li J, Chen P, Liu P, He Y. Transcriptome analysis of eyestalk and hemocytes in the ridgetail white prawn Exopalaemon carinicauda: assembly, Annotation and Marker Discovery. Mol Biol Rep. 2015;42(1):135–47.
Shen H, Hu Y, Ma Y, Zhou X, Xu Z, Shui Y, Li C, Xu P, Sun X. In-Depth Transcriptome Analysis of the Red Swamp Crayfish Procambarus clarkii. PLoS ONE. 2014;9(10):e110548.
Chow S, Dougherty WJ, Sandifer PA. Meiotic chromosome complements and nuclear DNA contents of four species of shrimps of the genus Penaeus. J Crustacean Biol. 1990;10(1):29–36.
Sookruksawong S, Sun F, Liu Z, Tassanakajon A. RNA-Seq analysis reveals genes associated with resistance to Taura syndrome virus (TSV) in the Pacific white shrimp Litopenaeus vannamei. Dev Comp Immunol. 2013;41(4):523–33.
Pradeep B, Shekar M, Karunasagar I, Karunasagar I. Characterization of variable genomic regions of Indian white spot syndrome virus. Virology. 2008;376(1):24–30.
Thitamadee S, Prachumwat A, Srisala J, Jaroenlak P, Salachan PV, Sritunyalucksana K, Flegel TW, Itsathitphaisarn O. Review of current disease threats for cultivated penaeid shrimp in Asia. Aquaculture. 2016;452:69–87.
Koski LB, Golding GB. The Closest BLAST Hit Is Often Not the Nearest Neighbor. J Mol Evol. 2001;52(6):540–2.
Price MN, Dehal PS, Arkin AP. FastTree 2—Approximately Maximum-Likelihood Trees for Large Alignments. PLoS ONE. 2010;5(3):e9490.
Stamatakis A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014;30(9):1312–3.
Hedges SB, Marin J, Suleski M, Paymer M, Kumar S. Tree of Life Reveals Clock-Like Speciation and Diversification. Mol Biol Evol. 2015;32(4):835–45.
Peterson KJ, Cotton JA, Gehling JG, Pisani D. The Ediacaran emergence of bilaterians: congruence between the genetic and the geological fossil records. Philos Trans R Soc B Biol Sci. 2008;363(1496):1435–43.
Boto L. Horizontal gene transfer in the acquisition of novel traits by metazoans. P Roy Soc B-Biol Sci. 2014;281(20132450).
Stroun M, Lyautey J, Lederrey C, Mulcahy HE, Anker P. Alu repeat sequences are present in increased proportions compared to a unique gene in plasma/serum DNA: evidence for a preferential release from viable cells? Ann NY Acad Sci. 2001;945:258–64.
Abrusan G, Szilagyi A, Zhang Y, Papp B. Turning gold into 'junk': transposable elements utilize central proteins of cellular networks. Nucleic Acids Res. 2013;41(5):3190–200.
Nefedova LN, Kuzmin IV, Makhnovskii PA, Kim AI. Domesticated retroviral GAG gene in Drosophila: New functions for an old gene. Virology. 2014;450–451:196–204.
van Hulten MCW, Witteveldt J, Snippe M, Vlak JM. White spot syndrome virus envelope protein VP28 is involved in the systemic infection of shrimp. Virology. 2001;285(2):228–33.
Schölkopf B, Platt JC, Shawe-Taylor J, Smola AJ, Williamson RC. Estimating the support of a high-dimensional distribution. Neural Comput. 2001;13(7):1443–71.
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, et al. Scikit-learn: Machine Learning in Python. J Mach Learn Res. 2011;12:2825–30.
Liu H, Söderhäll K, Jiravanichpaisal P. Antiviral immunity in crustaceans. Fish Shellfish Immunol. 2009;27(2):79–88.
Wang L, Sun X, Zhou Z, Zhang T, Yi Q, Liu R, Wang M, Song L. The promotion of cytoskeleton integration and redox in the haemocyte of shrimp Litopenaeus vannamei after the successive stimulation of recombinant VP28. Dev Comp Immunol. 2014;45(1):123–32.
Sritunyalucksana K, Wannapapho W, Lo CF, Flegel TW. PmRab7 is a VP28-binding protein involved in white spot syndrome virus infection in shrimp. J Virol. 2006;80(21):10734–42.
Ongvarrasopone C, Chanasakulniyom M, Sritunyalucksana K, Panyim S. Suppression of PmRab7 by dsRNA inhibits WSSV or YHV infection in shrimp. Mar Biotechnol. 2008;10(4):374–81.
Dillies MA, Rau A, Aubert J, Hennequet-Antier C, Jeanmougin M, Servant N, Keime C, Marot G, Castel D, Estelle J, et al. A comprehensive evaluation of normalization methods for Illumina high-throughput RNA sequencing data analysis. Brief Bioinform. 2013;14(6):671–83.
Birney E. Assemblies: the good, the bad, the ugly. Nat Methods. 2011;8(1):59–60.
Marchler-Bauer A, Derbyshire MK, Gonzales NR, Lu S, Chitsaz F, Geer LY, Geer RC, He J, Gwadz M, Hurwitz DI, et al. CDD: NCBI's conserved domain database. Nucleic Acids Res. 2015;43(D1):D222–6.
Bao W, Kojima KK, Kohany O. Repbase Update, a database of repetitive elements in eukaryotic genomes. Mobile DNA-UK. 2015;6(11):11.
Cock PJA, Antao T, Chang JT, Chapman BA, Cox CJ, Dalke A, Friedberg I, Hamelryck T, Kauff F, Wilczynski B, et al. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics. 2009;25(11):1422–3.
Ma KY, Chan TY, Chu KH. Phylogeny of penaeoid shrimps (Decapoda: Penaeoidea) inferred from nuclear protein-coding genes. Mol Phylogenet Evol. 2009;53(1):45–55.
Edgar RC. MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 2004;32(5):1792–7.
Shimodaira H, Hasegawa M. CONSEL: for assessing the confidence of phylogenetic tree selection. Bioinformatics. 2001;17(12):1246–7.
Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25(14):1754–60.
Anders S, Pyl PT, Huber W. HTSeq—a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015;31(2):166–9.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2009;26(1):139–40.
We are thankful for constructive comments provided by anonymous reviewers.
This work was supported by the Agricultural Science and Technology Achievement Transformation Fund Project of Ministry of Science and Technology of the People's Republic of China (No. 2012GB2E200361), the Northwest A&F University Experimental Demonstration Station (Base) and Innovation of Science and Technology Achievement Transformation Project (No. XNY2013-4), the Open Fund of Key Laboratory of Experimental Marine Biology, Chinese Academy of Sciences (No. KF2015No11) and the Overall Plan of Scientific and Technical Innovation Projects of Shaanxi Province (No. 2015KTTSNY01-01).
All data generated or analyzed during this study are included in this published article and its supplementary information files.
XW carried out the collection and analysis of data, wrote the Python scripts and wrote the manuscript; XL participated in the design of the study. Both authors read and approved the final manuscript.
Shaanxi Key Laboratory of Molecular Biology for Agriculture, College of Animal Science and Technology, Northwest A&F University, Yangling, 712100, Shaanxi, People's Republic of China
Xianzong Wang & Xiaolin Liu
Xianzong Wang
Xiaolin Liu
Correspondence to Xiaolin Liu.
Detailed information of identified TE transcripts. (DOCX 72.2 kb)
Longest protein sequences of 10 L. vannamei TE families. (FASTA 10 kb)
Phylogenetic trees built by RAxML. (DOCX 3871 kb)
Phylogenetic tree of Nimb-1_LVa and its homologues. (PNG 2283 kb)
Phylogenetic tree of Penelope-1_LVa and its homologues. (PNG 560 kb)
Phylogenetic tree of RTE-2_LVa and its homologues. (PNG 2080 kb)
Additional file 10: Expression change of Rab7 gene in response to VP28 stimulation. (DOCX 16 kb)
HTT transcripts' expression change in two BioProjects. (XLSX 24 kb)
Supplementary methods and source codes. (DOCX 72 kb)
GenBank accession numbers of protein sequences used for phylogenetic analyses. (XLSX 20 kb)
Multiple sequence alignments used for phylogenetic analyses. (ZIP 546 kb)
Detailed information of two NCBI BioProjects, PRJNA253518 and PRJNA233549. (XLSX 29 kb)
Wang, X., Liu, X. Close ecological relationship among species facilitated horizontal transfer of retrotransposons. BMC Evol Biol 16, 201 (2016). https://doi.org/10.1186/s12862-016-0767-0
Horizontal transfer
Retrotransposon
Ecological relationship | CommonCrawl |
Nitrogen Fixation and Hydrogen Metabolism in Cyanobacteria
Hermann Bothe, Oliver Schmitz, M. Geoffrey Yates, William E. Newton
Hermann Bothe
Botanical Institute, The University of Cologne, D-50923 Cologne, Germany
For correspondence: [email protected]
Oliver Schmitz
Metanomics GmbH, Tegeler Weg 33, 10589 Berlin, Germany
M. Geoffrey Yates
Fir Trees, Kingston Ridge, Kingston, Lewes, Sussex BN7 3JU, England
William E. Newton
Department of Biochemistry, Virginia Polytechnic Institute & State University, Blacksburg, Virginia 24061
DOI: 10.1128/MMBR.00033-10
Summary: This review summarizes recent aspects of (di)nitrogen fixation and (di)hydrogen metabolism, with emphasis on cyanobacteria. These organisms possess several types of the enzyme complexes catalyzing N2 fixation and/or H2 formation or oxidation, namely, two Mo nitrogenases, a V nitrogenase, and two hydrogenases. The two cyanobacterial Ni hydrogenases are differentiated as either uptake or bidirectional hydrogenases. The different forms of both the nitrogenases and hydrogenases are encoded by different sets of genes, and their organization on the chromosome can vary from one cyanobacterium to another. Factors regulating the expression of these genes are emerging from recent studies. New ideas on the potential physiological and ecological roles of nitrogenases and hydrogenases are presented. There is a renewed interest in exploiting cyanobacteria in solar energy conversion programs to generate H2 as a source of combustible energy. To enhance the rates of H2 production, the emphasis perhaps need not be on more efficient hydrogenases and nitrogenases or on the transfer of foreign enzymes into cyanobacteria. A likely better strategy is to exploit the use of radiant solar energy by the photosynthetic electron transport system to enhance the rates of H2 formation and so improve the chances of utilizing cyanobacteria as a source for the generation of clean energy.
Biological (di)nitrogen fixation is catalyzed by the enzyme complex nitrogenase, in which the formation of molecular hydrogen accompanies ammonia production according to equation 1:

$$ 8\mathrm{H}^{+} + 8e^{-} + \mathrm{N}_{2} + 16\,\mathrm{MgATP} \rightarrow 2\mathrm{NH}_{3} + \mathrm{H}_{2} + 16\,\mathrm{MgADP} + 16\,\mathrm{P_i} \qquad (1) $$

Whereas H2 formation by nitrogenases is unidirectional, H2 production by some hydrogenases is reversible, as shown in equation 2:

$$ 2\mathrm{H}^{+} + 2e^{-} \leftrightarrow \mathrm{H}_{2} \qquad (2) $$

N2 fixation and H2 formation are closely linked processes, as has been known at least since a publication by Phelps and Wilson in 1941 (39). Hydrogenase recycles the H2 produced in N2 fixation, thereby minimizing the loss of energy during nitrogenase catalysis. A rather simple scheme showing the relationship between pyruvate degradation, N2 fixation, and production and uptake of H2, as occur in strict anaerobes such as Clostridium pasteurianum or in the facultative anaerobe Klebsiella pneumoniae, is shown in Fig. 1. However, H2 can also be produced independently of N2 fixation, e.g., as an end product of fermentation, which can also take place in N2-fixing organisms.
A simple scheme showing the relationship between pyruvate degradation, ammonium and hydrogen formation by nitrogenase, and hydrogen uptake by hydrogenase. This pathway is typical in strict or facultative anaerobes but also proceeds in cyanobacteria.
As described in detail below, nitrogenases (Mo, V, and homocitrate) and hydrogenases (Ni, CO, and CN−) contain unusual components in their prosthetic groups (Fig. 2 and 3) that are not or only rarely employed elsewhere in nature. Their roles and their biosyntheses pose fascinating questions that are as yet only partly resolved. Most cyanobacteria are aerobic organisms producing O2 photosynthetically. They are generally not exposed to environmental molecular H2. Despite this, and paradoxical at first glance, the capability to metabolize H2 is constitutively expressed in many aerobic cyanobacteria. N2 fixation and H2 metabolism have been key research areas in microbiology over the years. Cyanobacteria are the best suited organisms for studies on the subject, because several of them, both unicellular and heterocystous forms, can be easily genetically modified by molecular techniques. Moreover, cyanobacterial H2 production offers perspectives for potential applications.
The structure of the 2:1 Fe protein-MoFe protein complex of the Azotobacter vinelandii nitrogenase stabilized by MgADP plus AlF4−. Each Fe protein molecule (shown at the top left and bottom right of the complex in brown) docks directly over the interface between an α/β subunit pair of the MoFe protein (in black and gray), which occupies the center of the structure, to juxtapose its [4Fe-4S] cluster (in yellow) with a P cluster (in red) at this interface. One FeMo cofactor (in pale blue) is accommodated within each α subunit. The two β subunits (in gray) provide the interactions among the two α/β subunit pairs (183) (Protein Data Bank [PDB] code 1N2C). (Adapted from reference 183 with permission from Macmillan Publishers Ltd.)
The structure of the FeMo cofactor of the Azotobacter vinelandii nitrogenase MoFe protein with its α subunit-based ligating amino acid residues (αCys-275 and αHis-442) and homocitrate. The Mo (red), Fe (gray), and S (pale green) atoms are individually colored. The identity of the central atom (blue) remains unassigned (PDB code 1M1N). (Reprinted from reference 61 with permission from AAAS.)
Both N2 fixation (153, 177) and H2 metabolism (226, 228) have been reviewed. Excellent accounts on cyanobacterial hydrogenases (82, 212, 214) are available, and those articles should be consulted for primary references. The aim of this review is not to reiterate these subjects but to highlight facts and ideas, particularly on the physiology, that have not received much attention in the past. This review also emphasizes the more recent developments and focuses on the fact that nitrogenases and hydrogenases are common players in H2 metabolism. The restriction to cyanobacteria as the best candidates for applications appears to be timely.
MOLYBDENUM NITROGENASE
The longest-known and best-studied nitrogenase is the Mo nitrogenase, which occurs in all N2-fixing organisms with the exception of some CO-oxidizing bacteria (178). The Mo nitrogenase is encoded by the structural genes nifHDK. It consists of two component proteins. Figure 2 shows the structure of a 2:1 complex of the two components, which might approximate an electron transfer transition state, with the larger component in the center and one molecule of the smaller component at each end (see the legend to Fig. 2 for more information). The nifH gene codes for the smaller, homodimeric (γ2) protein, which has a molecular mass of about 64 kDa and is termed Fe protein, (di)nitrogenase reductase, or protein 2. Its prosthetic group is a [4Fe-4S] cluster that bridges the subunit interface and is ligated by two cysteinyl residues from each subunit. This cluster accepts reducing equivalents from electron carriers which are either ferredoxin or flavodoxin, depending on the organism. Each subunit possesses a MgATP/MgADP binding site. When provided with MgATP and reductant, the Fe protein undergoes a conformation change combined with a change of its redox potential of ca. −200 mV. Docking to the larger component protein (Fig. 2) lowers the redox potential further to about −600 mV and is accompanied by an additional conformation change. All these changes are prerequisites for the transfer of one electron from the Fe protein to the larger component protein with concurrent MgATP hydrolysis. Multiple electron transfers prepare the larger component for substrate binding and reduction. The Fe protein has the most conserved amino acid sequence among all nitrogenase proteins. Therefore, the nifH gene is best suited for DNA probing when searches for the occurrence of nitrogenase in organisms or different environments are undertaken (181).
The larger component protein (MoFe protein, dinitrogenase, or protein 1) is a tetrameric (α2β2) protein of about 240 kDa. It contains two unique prosthetic groups, the P cluster and the FeMo cofactor (Fig. 3). Each αβ dimer of the larger nitrogenase protein binds one FeMo cofactor and one P cluster. The P cluster is composed of both a [4Fe-4S] subcluster and a [4Fe-3S] subcluster, which share one S2−. It sits at the interface of the α and β subunits and is usually depicted as an intermediate in electron transfer from the Fe protein to the FeMo cofactor. However, there is no direct evidence to support this supposition. The P cluster may have an N2 fixation-specific role through which it provides the impetus to commit the reversibly bound N2 to the irreversible reduction pathway (70). The FeMo cofactor consists of 1 Mo atom, 7 Fe atoms, 9 S atoms, and homocitrate, plus an as-yet-unidentified light atom (or ion) at its center (Fig. 3). Although an educated first guess might be that it is N based, this suggestion remains unproven (see, for example, reference 234). The FeMo cofactor is the site of substrate binding and reduction. This cluster can again be subdivided into two subclusters, one [Mo-3Fe-3S] and one [4Fe-3S]. These are bridged by 3 S2− ligands and the light atom. Homocitrate, which is bound to Mo by two O ligands, is required for full catalytic activity, but its specific role remains unclear.
The substrate binding and reduction sites have not yet been identified definitively. The N2 molecule may be bound at a central 4Fe-4S face, possibly with participation of the light atom. The Mo-homocitrate entity would then not be directly involved in catalysis but could determine the redox potential of the cofactor. Alternatively, N2 may be bound directly to and be reduced at the Mo-homocitrate part of the FeMo cofactor. It is somewhat surprising that this issue has not yet been resolved despite extensive research for many years. However, neither the FeMo cofactor nor any other part of the nitrogenase complex binds a substrate on its own. Substrate binding and reduction commence only when both nitrogenase component proteins plus MgATP and reductant are available.
Nitrogenase catalyzes the reduction of many substrates other than N2, nearly all of which have a complete or partial triple bond in common, e.g., HC≡N (hydrocyanic acid), R—C≡N (nitriles), RN≡C (isonitriles), N2O (nitrous oxide), N≡N—N− (azide), and HC≡CH (acetylene); the main exceptions are H+ and NO2−. Of particular interest is the reduction of C2H2 to C2H4. In contrast to carbon fixation research, where an easily manageable isotope (14C) is available, N2 fixation research suffers from the absence of a similar isotope of N. 13N is highly radioactive and very unstable, and because 15N is nonradioactive, its reduction can be determined only by the somewhat laborious technique of mass spectrometry. In contrast, the gases C2H2 and C2H4 can be easily and quickly separated and quantified with high accuracy by gas chromatography. Unless special questions (e.g., the determination of the ratio between C2H2 and N2 reduction) are to be resolved, nitrogenase activity is routinely assayed by the C2H2 reduction method despite the fact that the ratio between N2 fixation and C2H2 reduction is not always 3:1. The reduction of all nitrogenase substrates is inhibited by CO, with the exception of H+ conversion to H2 (see below).
The reduction of N2 but not that of all other nitrogenase substrates is accompanied by the evolution of one H2 molecule for each N2 molecule that is reduced (203) (see equation 1). This formation of H2 could represent an activation step that is uniquely required for N2 binding (196). In the absence of any other substrate, nitrogenase catalyzes an ATP-dependent reduction of H+. The relationship(s) between the binding of N2, the other substrates, and inhibitors such as CO is apparently very complex and at best only partly understood. The complexity of the situation is evidenced by the fact that N2 is a competitive inhibitor of C2H2 reduction but C2H2 is a noncompetitive inhibitor of N2 reduction (179).
In addition to the three structural genes nifHDK, nitrogenase expression requires altogether 20 genes in the enterobacterium Klebsiella pneumoniae, all of which are contiguously located on the chromosome. In other bacteria, these genes are interspersed throughout the genome, and other fix genes may be necessary for nitrogenase synthesis and catalysis.
ALTERNATIVE NITROGENASES
Mo nitrogenase is now known to have two close relatives, the V nitrogenase and the Fe nitrogenase, but the distribution of these two enzymes appears to be haphazard (see below). The discovery of the alternative nitrogenases without molybdenum in their prosthetic groups can be regarded as a milestone in nitrogenase research. Reviews on this subject are available (18, 59, 167, 242). The aerobe Azotobacter vinelandii possesses gene sets for all three different types of nitrogenases (Fig. 4). Under conditions of Mo sufficiency in the culture medium, A. vinelandii expresses nifHDK, encoding Mo nitrogenase. When Mo is limiting but V is sufficiently available, A. vinelandii synthesizes a V nitrogenase with a VFe cofactor in the N2 binding and reducing site through expression of the alternative structural genes vnfHDGK. The occurrence of V in the prosthetic group of an enzyme complex is remarkable because, other than in V nitrogenase, the element V has only rarely been found to have a biological function, e.g., in some uncommon peroxidases (95). When the concentrations of both Mo and V are growth limiting, A. vinelandii synthesizes a third nitrogenase with an FeFe cofactor in the active site and encoded by the structural genes anfHDGK.
Genes coding for nitrogenases in two cyanobacteria and two other microorganisms. (Courtesy of Teresa Thiel, University of Missouri—St. Louis.)
All three nitrogenases are rather similar. They require both a larger and a smaller component protein for catalytic activity and possess the P cluster, with identical spectroscopic properties, and a special cofactor for the substrate binding and reducing site. All three nitrogenases show extensive but not identical amino acid sequence homologies. Most importantly, both alternative nitrogenases possess the additional G gene located between the D and K genes, and the resulting component proteins are, therefore, α2β2δ2 heterohexamers. The δ subunit has no counterpart with similar sequence homologies elsewhere. Its function has not been fully resolved, but it is apparently required for processing the apoprotein of the alternative nitrogenases to the functional enzyme complex by assisting in the insertion of the cofactor, as has been specifically shown for the V nitrogenase (45, 46). Remarkably, although the proteins VnfG and AnfG are required for N2 fixation by A. vinelandii, they are not required for C2H2 reduction (45, 46, 228).
Both alternative nitrogenases can support growth of A. vinelandii, albeit with lower rates than Mo nitrogenase. Both N2 and C2H2 are poorer substrates for the alternative nitrogenases than for the Mo enzyme. Whereas with Mo nitrogenase the stoichiometry between ammonia production and H2 formation is about 2:1, as shown in equation 1, the reaction via the V nitrogenase proceeds optimally as shown in equation 3:

12H+ + 12e− + N2 + 24MgATP → 2NH3 + 3H2 + 24MgADP + 24Pi    (3)

With Mo nitrogenase, virtually all electrons are allocated to C2H2 when it is the only substrate available. In contrast, C2H4 formation by V nitrogenase is accompanied by a significant production of H2. This H2 formation in the presence of either N2 or C2H2 seems to be even higher with the Fe nitrogenase, although these reactions have not been examined in comparable detail. These differences between the three nitrogenases are not due to differences in the apparent Km values for N2 and C2H2 and are also not caused by restricted electron transfer within or between the nitrogenase proteins (59). The differences may lie in the rate-limiting step in the nitrogenase catalytic cycle (220), which is the final dissociation of the oxidized Fe protein-MgADP from the electron transfer complex.
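The stoichiometric difference between equations 1 and 3 amounts to a difference in how electrons are partitioned between N2 reduction and H2 evolution. A minimal arithmetic sketch of this partitioning, using only the idealized stoichiometries given in the text (the function name is illustrative, not from any published analysis):

```python
# Idealized stoichiometries (electrons consumed per N2 reduced):
#   Mo nitrogenase (eq. 1):  8 e-  -> 2 NH3 + 1 H2
#   V  nitrogenase (eq. 3): 12 e-  -> 2 NH3 + 3 H2

def electron_split(total_e, h2_formed):
    """Return (fraction of e- used for NH3, fraction lost as H2).

    Each H2 consumes 2 e-; the remaining 6 e- reduce one N2 to 2 NH3.
    """
    e_to_h2 = 2 * h2_formed
    e_to_nh3 = total_e - e_to_h2
    return e_to_nh3 / total_e, e_to_h2 / total_e

mo = electron_split(8, 1)    # (0.75, 0.25): 25% of electrons lost as H2
v = electron_split(12, 3)    # (0.5, 0.5):   50% of electrons lost as H2
print(mo, v)
```

On these idealized numbers, the V enzyme diverts twice the electron fraction to H2 that the Mo enzyme does, which is one way to express its lower efficiency as an N2 reducer.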
The production of NH3 from N2 by the V nitrogenase is accompanied by the release of the presumptive reduction intermediate N2H4 (57). In addition, both the V and Fe nitrogenases reduce C2H2 beyond C2H4 to produce some C2H6. Although this ethane formation amounts to only about 3% of the total C2H2-reducing capacity, it can easily be assessed by gas chromatography and is therefore indicative of the expression of an alternative nitrogenase in an organism (56). Mo nitrogenase does not catalyze the reduction of ethene, but some H2-consuming methanogenic enrichment cultures have been reported to produce ethane from ethene apparently independently of nitrogenases (118).
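The ~3% ethane signature can be turned into a simple screening heuristic on gas-chromatography readouts. The sketch below is illustrative only; the function name and the 1% cutoff are assumptions chosen for the example, not an established assay criterion:

```python
def likely_alternative_nitrogenase(c2h4_nmol, c2h6_nmol, threshold=0.01):
    """Heuristic screen on a C2H2-reduction assay.

    Alternative (V or Fe) nitrogenases divert roughly 3% of the
    C2H2-reducing capacity to C2H6, whereas Mo nitrogenase yields
    essentially none.  The 1% threshold is an arbitrary illustrative
    cutoff, not a published standard.
    """
    total = c2h4_nmol + c2h6_nmol
    return total > 0 and c2h6_nmol / total >= threshold

print(likely_alternative_nitrogenase(97.0, 3.0))   # ~3% ethane: True
print(likely_alternative_nitrogenase(100.0, 0.0))  # Mo-type pattern: False
```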
The apparently haphazard distribution of nitrogenases results in some organisms having all three, some possessing only Mo nitrogenase, and others having the Mo and V but not the Fe nitrogenase or the Mo and Fe nitrogenases without the V nitrogenase. Azotobacter vinelandii (19), Azotobacter paspali (129), Rhodopseudomonas palustris (155), and the archaeon Methanosarcina acetivorans (76) are the only organisms so far identified that possess gene sets for all three nitrogenases. The combination of a Mo and a V nitrogenase is found in Azotobacter chroococcum, Azotobacter salinestris, and the archaeon Methanosarcina barkeri 227 (129) and in several cyanobacteria (see below). The Mo and Fe nitrogenases but not the V enzyme occur in Clostridium pasteurianum, Azomonas macrocytogeneses, and Azospirillum brasilense Cd (44) and in the phototrophs Rhodospirillum rubrum, Rhodobacter capsulatus, and Heliobacterium gestii (17).
Probes have been developed from vnfG and anfG to specifically amplify gene segments by PCR and to detect the alternative nitrogenases in organisms. By this technique, Loveless et al. (130) were able to isolate seven diazotrophs from aquatic environments that possess an alternative nitrogenase(s) and belong to the fluorescent pseudomonads and azotobacteria of the gammaproteobacteria. Recently, 24 bacteria of the same group, one closely related to Enterobacter and another with sequences almost identical to those of Paenibacillus, were isolated from diverse habitats, all with an alternative nitrogenase(s) (17). Outside the pseudomonads and azotobacteria, alternative nitrogenases occur only occasionally, and then in prokaryotes of totally unrelated taxonomic affinities. The rather close sequence similarities of the nitrogenase genes suggest that they may have arisen by gene duplication in the azotobacter-fluorescent pseudomonad group (17). In other organisms, however, there is little correlation between vnfG and anfG sequences on the one hand and the phylogeny inferred from the 16S rRNA gene sequence data on the other. This could mean that alternative nitrogenase genes may have been interspersed by lateral gene transfer among nonmembers of the azotobacter-pseudomonad group.
An indication of such lateral transfer seems to occur in Methanosarcina barkeri 227 (47). This archaeon possesses a D gene and a G gene with close sequence homologies to vnfDG from other organisms, particularly Anabaena variabilis. The vnfH gene is separated from the vnfDGK cluster by two open reading frames (ORFs). Phylogenetic analysis indicates that this H gene is a member of a separate cluster comprising anfH genes of several bacteria and is closely related to anfH from Rhodobacter capsulatus and Clostridium pasteurianum. This cluster might also include vnfH from A. vinelandii. In another methanogen, Methanococcus maripaludis, with only a single nitrogenase, nifD and nifK cluster with the other genes for the Mo nitrogenase, whereas the H gene is an amalgam of both Mo and V nitrogenase H genes (113). Thus, vnfH and vnfDGK may have been acquired from other organisms by two independent gene transfers.
Such processes are difficult to understand because there is no apparent selective pressure to acquire and maintain alternative nitrogenases. Conditions in nature where Mo is growth limiting in soils or aqueous habitats are unknown, and microorganisms have high-affinity transport systems that effectively mobilize Mo from habitats (168). These mobilizations may result in microzones of Mo depletion around microorganisms where bacteria that can express an alternative nitrogenase(s) have a selective advantage (142). However, the isolation of diazotrophs with alternative nitrogenases from habitats with sufficient Mo concentrations (17) may indicate that these enzymes could have other, but so far totally unresolved, functions in nature. Otherwise, why would these genes, if redundant, be retained in organisms during evolution?
NITROGENASES IN CYANOBACTERIA
Occurrence of Nitrogenases in Heterocysts

Cell-free preparations of nitrogenases from all organisms are irreversibly damaged by O2, and different groups of microorganisms have been versatile in developing various means to protect their nitrogenases against the O2 of the air (153). In cyanobacteria, the O2 problem is enhanced by the photosynthetic production of this gas. Many filamentous cyanobacteria solve the issue by cell differentiation. Under aerobic growth conditions, their vegetative cells perform photosynthetic O2 evolution and CO2 fixation, whereas nitrogenase resides in specialized cells, the heterocysts (66). These differentiate from vegetative cells by cell division and extensive metabolic changes (133, 162). Photosystem II (PSII) is largely degraded in heterocysts so that they cannot perform the photosynthetic water-splitting reaction. They are also unable to fix CO2 photosynthetically. Vegetative cells provide photosynthetically fixed carbon, which may be exported as sucrose to the heterocysts (52). In turn, heterocysts provide nitrogen, likely as glutamine formed via ammonia generated by N2 fixation and both glutamine synthetase and glutamate synthase (219). Alternatively, glutamine may be converted to arginine which is then incorporated into the cyanophycin granule. This may be degraded by cyanophycinase in a dynamic way depending on the N demand of heterocysts and vegetative cells (86).
Heterocysts possess a thick cell envelope composed of long-chain, densely packed glycolipids providing a barrier to gas exchange (9). The main diffusion pathway for O2 and N2 might be through the terminal pores ("microplasmodesmata") (83) that connect heterocysts with vegetative cells. Walsby (230) suggested that transmembrane proteins make the narrow pores permeable enough and might provide a means of regulating gas exchange. Residual O2 reaching the inside of the heterocysts might be immediately consumed by their high respiratory activity and also other reactions in these cells. In this way, heterocysts provide an anaerobic environment which allows nitrogenase to function.
The occurrence of nonspecific intercellular channels between heterocysts and vegetative cells has recently been confirmed (149). Any analogy to the plasmodesmata of higher plants is misleading, however, because cyanobacteria do not possess an endoplasmic reticulum. However, the export of metabolites might follow the source-sink gradient along the intercellular channels of both plants and cyanobacteria. Alternatively, the periplasmic space between the peptidoglycan layer and the outer membrane could constitute a communication conduit for the transfer of compounds, since this space is continuous between heterocysts and vegetative cells (72).
Heterocyst formation from vegetative cells of Anabaena species takes about 24 h after the cells have suffered N deprivation. More than 500 proteins are differentially expressed in heterocysts during cellular transformation from vegetative cells (162), showing that this complex process is under the control of many genes. Master regulators are HetR, a serine-type protease, and NtcA, a nitrogen control transcription factor in cyanobacteria (115, 152, 160, 200). Expression of hetR is upregulated by nitrogen deprivation, and this upregulation depends on NtcA (62). Heterocyst formation is also controlled by the availability of 2-oxoglutarate, which provides the carbon skeleton for the incorporation of inorganic nitrogen and which also serves as a signal molecule of the organic carbon content in the developing heterocysts (122, 161, 223). NtcA is the main 2-oxoglutarate sensor for the initiation of heterocyst differentiation (239). The otherwise important signal protein PII, which is involved in regulation of nitrogen metabolism in bacteria and plants, is apparently not required for heterocyst formation (240). Nitrogenase synthesis has a high demand for Fe. The uptake of this element is controlled by furA, whose expression is also modulated by NtcA and HetR (128). The reader is referred to review articles on this complex regulatory cascade (84, 94).
Before nitrogenase can be expressed in Anabaena sp. strain 7120, a gene rearrangement has to occur within nifD. An 11-kb DNA element is excised by a specific enzyme (XisA), and the two fragments of nifD are ligated to allow nitrogenase transcript formation to proceed. The excisase gene xisA is located on the excised DNA element. This gene rearrangement occurs in heterocystous cyanobacteria, such as the best-studied species Anabaena variabilis (37) and Anabaena (Nostoc) PCC 7120 (43), but not in nonheterocystous, N2-fixing forms (93). Similar rearrangements happen during the late stages of heterocyst development of some cyanobacteria. These include excision within a special ferredoxin (fdxN) of a 55-kb element by XisF and excision of a 10.5-kb element within the large subunit of uptake hydrogenase (hupL) (see below) mediated by XisC. These genetic elements may represent ancient viruses that have come under the control of the host and are excised as required. Similar gene rearrangements were detected during spore formation in bacteria. The subject has been reviewed (93), and newer publications on this subject are available (43, 96, 198).
Electron Transport to Nitrogenase in Cyanobacteria

Electron transport to nitrogenase has been studied extensively in heterocystous cyanobacteria. Heterocysts have a very active ferredoxin- and photosystem I-dependent cyclic photophosphorylation (28) which generates the ATP for N2 fixation. These cells possess several ferredoxin-like Fe-S proteins. Of these, a special FdxH is expressed only in heterocysts and was proposed to serve as the electron carrier to nitrogenase. However, mutants with mutations in FdxH can still perform N2 fixation at a high rate (138), indicating that this protein can be replaced by others. Another ferredoxin-like protein, FdxB (PatB), is specifically expressed in heterocysts (107). Neither ferredoxin was identified in a quantitative proteomic investigation of heterocysts (163).
Reducing equivalents for the reduction of ferredoxins can be generated by several pathways (Fig. 5). In heterocysts, in the light, ferredoxin can be reduced via photosystem I. Alternatively, either NAD(P)H and a dehydrogenase or H2 and uptake hydrogenase (see below) can feed in electrons at the plastoquinone site (or close to it). In darkness, ferredoxin can be reduced by NAD(P)H and NAD(P)H:ferredoxin oxidoreductase (FNR) present in heterocysts and vegetative cells. The reduction of ferredoxin can also be achieved by the pyruvate phosphoroclastic reaction. Here, pyruvate and coenzyme A are cleaved to acetyl coenzyme A and CO2, and the remaining two electrons are transferred to ferredoxin. The enzyme involved, the pyruvate:ferredoxin oxidoreductase (PFO), is typically found in anaerobes, either strict (Clostridium) or facultative (Escherichia coli).
Generation of reductant for N2 fixation in cyanobacteria. The details are explained in the text.
A somewhat controversial issue arose regarding the occurrence of PFO in cyanobacteria. The enzyme was originally observed in extracts from Anabaena variabilis (120) and was then characterized in much greater detail from Anabaena cylindrica (151). Extracts from the latter cyanobacterium catalyzed the pyruvate-dependent reduction of methyl viologen (as an artificial substitute of ferredoxin) with formation of CO2 and the synthesis of acetohydroxamate from the acetyl coenzyme A produced. The reverse reaction, the synthesis of pyruvate from acetyl coenzyme A, CO2, and reduced ferredoxin, was also demonstrated. This reaction is even more indicative of the occurrence of the pyruvate:ferredoxin oxidoreductase because the pyruvate dehydrogenase complex is thermodynamically unable to catalyze this reaction.
Despite all this work, the occurrence of the phosphoroclastic reaction in cyanobacteria was not readily accepted in the literature until 1993, when two groups independently published sequences of the nifJ gene, encoding PFO. The enzyme from Anabaena sp. PCC 7120 was expressed only under Fe deficiency in the growth medium (12), whereas it was constitutive and independent of the Fe content in A. variabilis (192). The sequenced parts of the two nifJ genes showed only a low similarity of ca. 75%, in contrast to the sequences of other genes from the two organisms, which did not differ by more than 5%. The genome sequencing project for Anabaena 7120 then revealed that this cyanobacterium contained two nifJ genes and that the two above-mentioned groups had each sequenced a different nifJ copy. All cyanobacterial PFO sequences cluster with those from strict anaerobes, such as Clostridium or Desulfovibrio (191). However, as shown by the lux reporter system, PFO is expressed both under aerobic growth conditions and in Fe-replete medium in the unicellular, non-N2-fixing Synechococcus sp. PCC 7942. This cyanobacterium and other completely sequenced unicellular cyanobacteria contain only one PFO. Their genomes also contain sequences for phosphotransacetylase and acetate kinase. Acetyl coenzyme A could, therefore, be converted to acetyl-phosphate and then to ATP as a fermentative generation of additional energy. Such ATP generation has, however, never been verified experimentally in cyanobacteria.
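The ~75% similarity between the two nifJ copies, against the <5% divergence of other shared genes from the two organisms, rests on simple pairwise identity of aligned sequences. A toy sketch of that calculation; the sequence fragments are invented placeholders for illustration, not real nifJ data, and gap handling is omitted:

```python
def percent_identity(a, b):
    """Percent identity of two pre-aligned, equal-length sequences.

    Gap handling is omitted for brevity; real comparisons would use a
    proper alignment tool first.
    """
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Invented toy fragments, for illustration only:
seq1 = "ATGGCTAAAGTTCTGACC"
seq2 = "ATGGCAAAAGTGCTGACC"
print(round(percent_identity(seq1, seq2), 1))
```

Applied genome-wide, exactly this kind of per-gene identity profile is what exposed the presence of two distinct nifJ copies rather than one divergent gene.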
Under Fe deficiency conditions, some cyanobacteria synthesize flavodoxin (formerly termed phytoflavin) instead of ferredoxin (221). Despite statements to the contrary (12), flavodoxin effectively transfers electrons to nitrogenase when properly reduced (32). Flavodoxin exists in three redox states, the oxidized, semiquinone, and fully reduced (hydroquinone) forms. Only the hydroquinone/semiquinone couple, with an E0′ of about −500 mV, can transfer electrons to nitrogenase in cyanobacteria (32) and in Azotobacter (235). Reduction of flavodoxin to the fully reduced state does not occur effectively using NAD(P)H [E0′ of NAD(P)H/NAD(P)+ = −320 mV], but it can proceed via photosystem I or from pyruvate (E0′ for the pyruvate cleavage ∼ −500 mV). Flavodoxin is constitutive in the nonphotosynthetic aerobe Azotobacter vinelandii (225). It remains to be elucidated under what conditions flavodoxin has a physiological role in cyanobacteria. Fe deficiency is generally not a constraint in nature that demands the expression of flavodoxin. The demonstration of flavodoxin, other flavoproteins, and other ferredoxin-like electron transferring proteins in heterocysts of Nostoc sp. PCC 7120 (162) in non-Fe-limited cultures may indicate that other, still unresolved electron transfer pathways operate in these specialized cells. Similar evidence may be derived from work with Nostoc punctiforme ATCC 29133, where two ferredoxin-like electron transport proteins show a markedly increased abundance together with FNR in heterocysts (163). Flavodoxin was reported to enhance cyclic electron flow around photosystem I in salt-stressed cells (89), which may also occur in N2-fixing heterocysts.
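Why NAD(P)H cannot effectively drive flavodoxin to the hydroquinone state follows directly from the midpoint potentials quoted above, via ΔG0′ = −nFΔE0′. A worked check using only those numbers (the function is a generic thermodynamic sketch, not from the cited work):

```python
F = 96.485  # Faraday constant, kJ mol^-1 V^-1

def delta_g(e_donor_mv, e_acceptor_mv, n=2):
    """Standard free-energy change (kJ/mol) for transferring n electrons
    from a donor couple to an acceptor couple:
        dG0' = -n * F * (E0'_acceptor - E0'_donor)
    Potentials are given in mV."""
    return -n * F * (e_acceptor_mv - e_donor_mv) / 1000.0

# Reducing the flavodoxin hydroquinone/semiquinone couple (about -500 mV):
print(delta_g(-320, -500))  # from NAD(P)H: positive dG0', i.e. unfavorable
print(delta_g(-500, -500))  # from pyruvate cleavage: about zero, feasible
```

The ~+35 kJ/mol penalty for the NAD(P)H route explains why photosystem I or the pyruvate phosphoroclastic reaction, both operating near or below −500 mV, are the plausible physiological reductants.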
Alternative Nitrogenases in Cyanobacteria

The occurrence of the V nitrogenase in cyanobacteria was first inferred from physiological evidence with A. variabilis (111). Under Mo deficiency and with V in the culture medium, this cyanobacterium reduced significant amounts of C2H2 to C2H6 and also produced much more H2 than Mo-grown cells. Subsequently, Thiel and coworkers performed the molecular characterization in great detail (216). In A. variabilis, the vnfDGKEN genes occur as a cluster, whereas four other H genes, in addition to nifH, are interspersed on the chromosome (Fig. 4). A vnfH gene is located 23 bp from vnfDGK. Either NifH or VnfH can act to complement either Mo or V nitrogenase. Two copies of the H gene exist in Nostoc punctiforme, which does not possess any other genes encoding an alternative nitrogenase (Fig. 4).
Among cyanobacteria, the V nitrogenase has been found only in A. variabilis, in an Anabaena isolate from the fern Azolla (154), in the southern Chinese rice field isolates Anabaena CH1 and Anabaena azotica (26), and recently in one Nostoc strain and two Anabaena strains (141). Anabaena azotica thrives at high temperatures at which Azolla dies. A different expression pattern for the two cyanobacterial nitrogenases, possibly dependent on growth temperature, was suspected (26). In support of this idea, the V but not the Mo nitrogenase of A. vinelandii has been found to be active at lower temperatures (167). However, the specific activities of C2H2 reduction for both Mo and V nitrogenase of A. azotica were found to be the same over a range of temperature and light regimens (26). Thus, the V nitrogenase is unlikely to provide a selective advantage for A. azotica at higher temperatures. Other conditions, such as Mo-deficient microzones around microbial colonies, unusually high W concentrations (which block Mo nitrogenase synthesis), or high alkalinity (pH of ∼10), have been suggested, but not proven, to favor V nitrogenase gene expression (222).
The close sequence similarity of the cyanobacterial vnfDG genes to those of Methanosarcina spp. could indicate an archaeal origin for the alternative nitrogenase similar to that for the Mo enzyme (176). Alternatively, these two groups of organisms with totally unrelated taxonomic affinities may have retained these genes in evolution by chance.
Some physiological evidence has been presented for the existence of the Fe nitrogenase in A. variabilis (112). However, the completely sequenced chromosome of this organism and of more than 30 other cyanobacteria did not reveal genes coding for the Fe nitrogenase, and a nifH vnfH double mutant of Anabaena variabilis did not grow diazotrophically (172). Thus, the evidence, particularly the positive results after hybridization with an anfH probe from Azotobacter vinelandii (112), must indicate the presence of some other sequence-related entity (possibly two other nifH copies [Fig. 4]). In the past, searches for nitrogenases were often based on probing with the nifH gene. However, sequences of anfH are significantly divergent from those of nifH and vnfH (59), and thus a cyanobacterial Fe nitrogenase, e.g., one encoded on a plasmid, may have been missed by probing with the nifH gene.
In waters, cyanobacteria thrive under oxygenic conditions where Fe is generally limiting but Mo or V is abundantly available (63). Those authors suggest that these conditions may favor the expression of Mo or V nitrogenase, whereas the concentration of Fe is possibly too low to allow synthesis of the Fe nitrogenase.
In 1995, two groups independently reported the existence of a second Mo nitrogenase in A. variabilis (193, 217). The "classical" Mo nitrogenase occurs only in heterocysts of this organism. The second Mo nitrogenase is encoded by a separate set of nifHDK genes and is expressed in vegetative cells under anaerobic or, more precisely, low-O2-tension conditions because these cells produce O2 photosynthetically. It resembles, by its expression under anaerobic conditions, the enzyme from the filamentous, nonheterocystous Plectonema (Leptolyngbya) boryanum (209). Its physiological and biochemical properties in A. variabilis have not been studied extensively. The distribution of this enzyme has recently been screened in several cyanobacteria (141).
Nitrogen Fixation in Nonheterocystous Cyanobacteria

The literature on nitrogen fixation in nonheterocystous cyanobacteria up to the mid-1990s was extensively reviewed (14). Therefore, this section concentrates on more recent results.
Many nonheterocystous cyanobacteria can fix N2, but almost all of them do so under anaerobic conditions, or, rather, under conditions of decreased O2 tension. Several of them were shown to separate these two incompatible reactions, with photosynthetic CO2 fixation being performed in the light and N2 fixation in darkness. Thus, at night, nitrogenase is not exposed to the photosynthetically produced O2, and respiration might then utilize most of the O2 of the air to provide anaerobic conditions, especially in dense cultures or in biofilms. However, not all nonheterocystous cyanobacteria show this circadian rhythm. Gloeothece and Synechococcus (Cyanothece) spp. also fix N2 during the day and can grow slowly under continuous illumination. In the oceans, the filamentous Trichodesmium may show a division of labor in which some cells perform photosynthesis whereas others fix N2 (14). However, a recent immunological study (156) revealed that more than 77% of all cells were nitrogenase immunopositive, indicating that Trichodesmium does not develop heterocyst-equivalent cells. Immunological studies indicated that nitrogenase in Plectonema, Gloeothece, and others is also uniformly distributed throughout all cells, thus showing no preferential association with a cell structure (14). Cyanobacteria did not develop O2 protection devices, such as changes in the enzyme's conformation upon exposure to excess O2 as in azotobacteria, production of leghemoglobin as in the rhizobia, or reversible modification of the Fe protein by ADP-ribosylation controlled by the DRAT/DRAG enzymes as in photosynthetic purple bacteria or azospirilla. Their respiratory activity does not seem to be extraordinarily high, as it is in Azotobacter sp. or in heterocysts, to consume all O2 entering the cells (66). Thus, N2 fixation in light by these few aerobic cyanobacteria remains an enigma.
Cyanobacterial N2 fixation in the oceans contributes significantly to the global N budget (15, 55, 202). In temperate areas, heterocystous species can form blooms in summer, but they are somewhat unpredictable in time and location, as exemplified for the fresh- and brackish-water species Aphanizomenon flos-aquae and the toxin-producing Nodularia spumigena (143). The major organisms in oceanic N2 fixation in areas of the warmer tropical and subtropical regions of the Pacific Ocean are Trichodesmium sp. and the heterocystous Richelia intracellularis, which lives inside diatoms (74, 206). In other areas of the Pacific Ocean, N2-fixing cyanobacteria, such as Crocosphaera watsonii, and the non-N2-fixing Prochlorococcus marinus thrive in abundance (238). Other nanoplanktonic organisms may be even more important there. Small uncultured cyanobacteria that fix N2 but are unable to perform photosynthetic CO2 fixation and thus O2 evolution have now been recognized (237), and they are particularly active during winter in areas of the Pacific Ocean (117). They have not yet been characterized properly, but their nitrogenase DNA sequences resemble those of the "spheroid bodies" that occur in the freshwater diatoms Rhopalodia gibba and Epithemia sp. (80). These diatoms grow very slowly on agar plates. During the time before the use of molecular biology techniques, physiological experiments demonstrated light-dependent C2H2 reduction by R. gibba even with the rather small amounts of cell material then available (71). More recently, DNA sequencing showed that the spheroid bodies of R. gibba indeed possess the structural nitrogenase genes (173). The spheroid bodies and uncultured marine cyanobacteria either could perform cyclic phosphorylation or may be completely dependent on a supply of both ATP and reductant from organic carbon in the environment.
These spheroid bodies, being N2-fixing entities within eukaryotic cells, might attract special attention in the near future for potential applications. They could serve as models in attempts to make plants independent from a supply with combined nitrogen by incorporating an N2-fixing cyanobacterium into their cells.
The discovery of a new group of N2-fixing cyanobacteria may appear to be totally unexpected. As mentioned above, nifH is very much conserved during evolution, and probing with nifH sequences should allow one to detect all N2-fixing microorganisms in environmental samples. Recent studies showed that most of the bacterial DNA sequences from soil (for nifH as well as for nosZ in denitrification and for the 16S rRNA gene for total bacteria) could be detected with the short DNA probes available, but the gene sequences in total were entirely new (60, 180).
HYDROGENASES IN GENERAL
The subject of hydrogenases has been extensively reviewed (226, 228). Therefore, just a few general facts will be mentioned here. There are three classes of hydrogenases: (i) the [FeFe] hydrogenase, (ii) the [NiFe] hydrogenase, and (iii) the methylenetetrahydromethanopterin-containing enzyme. The last enzyme is a homodimer, each subunit of which contains a low-spin, redox-inactive Fe atom which is involved in H2 splitting or formation (201, 211, 229). This enzyme has been found only in some methanogenic archaea. In all other hydrogenases, iron occurs in Fe-S clusters.
The [FeFe] hydrogenases have a unique active center (the H cluster) which produces about 100-fold higher activity than the other hydrogenases (229). The simplest [FeFe] hydrogenase occurs in green algae with only the H cluster as the prosthetic group (91). The H cluster contains two Fe atoms and the two ligands CO and CN−, which are attached to both of the Fe atoms. In green algae, the H cluster is directly reduced by ferredoxin. All other [FeFe] hydrogenases contain a relay of additional FeS centers (both 4Fe-4S and 2Fe-2S clusters) that are involved in electron transfer from the external electron source (reduced ferredoxin) to the H cluster deep inside these monomeric proteins. They possess hydrophobic channels from the surface to the active site (the H cluster) that provide access for protons and the egress of H2. [FeFe] hydrogenases function mostly in the disposal of excess reductant generated during fermentation under anaerobic conditions. However, the periplasmic [FeFe] hydrogenase of Desulfovibrio vulgaris is involved in the utilization of H2 in sulfate reduction (171). The enzyme occurs in anaerobes, such as the genera Clostridium and Desulfovibrio, and in eukaryotes (in chloroplasts of green algae or in hydrogenosomes). It has not been detected in cyanobacteria. This is true also for those cyanobacteria that synthesize starch (semiamylopectin) and could therefore be considered ancestors of chloroplasts (150). The evolutionary origin of the [FeFe] hydrogenase of green algae is a mystery yet to be resolved (132, 146).
The majority of hydrogenases in prokaryotes are Ni-containing enzymes. The core enzyme is an αβ heterodimer where the larger subunit, of ca. 60 kDa, possesses the deeply buried binuclear NiFe active site (Fig. 6). The Fe in this center binds two CN− and one CO. The whole cluster is ligated to the protein by the thiolate groups of four cysteines. The smaller subunit, of ca. 30 kDa, harbors FeS clusters (up to three) which serve to transfer electrons from or to the NiFe active site. As in the [FeFe] hydrogenases, there are hydrophobic channels from the active site to the surface of this globular αβ dimer. The Ni hydrogenases have a high affinity (low apparent Km) for H2, indicating that they act mostly in utilizing H2 in the different organisms. Indeed, they are often linked to nitrogenase, where they serve to utilize the H2 produced in N2 fixation. They are often membrane bound and feed electrons into the respiratory chain via either ubiquinone or a cytochrome at respiratory complex III. Often they are synthesized with a long signal peptide of 30 to 50 amino acid residues which is cleaved off when the hydrogenase is folded and incorporated into the membrane. They may be subdivided into four groups by their functions (227, 228).
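The inference from "high affinity (low apparent Km)" to an H2-scavenging role can be made concrete with Michaelis-Menten kinetics: at trace substrate levels, a low-Km enzyme still runs near its maximal rate. The Km values below are invented for illustration, not measured cyanobacterial parameters:

```python
def mm_rate_fraction(s, km):
    """Fractional saturation v/Vmax = S / (Km + S) (Michaelis-Menten)."""
    return s / (km + s)

# Illustrative (invented) values, in arbitrary concentration units:
trace_h2 = 1.0
print(mm_rate_fraction(trace_h2, km=0.1))   # low Km  -> ~0.91 of Vmax
print(mm_rate_fraction(trace_h2, km=10.0))  # high Km -> ~0.09 of Vmax
```

A low-Km enzyme thus keeps consuming H2 efficiently even at the trace concentrations left by nitrogenase, which fits the H2-recycling role described in the text.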
(A) Prosthetic group of [NiFe] hydrogenases in the oxidized, inactive form (Ni-A state [228]). (B) Upon reduction, it is converted to the active form (Ni-S state). (Adapted from reference 228, where further details can be found.)
In the oxidized form, [NiFe] hydrogenases are inactive due to a bridging hydroxo ligand between the Ni and Fe atoms (Fig. 6), and the different enzymes vary in their sensitivity to O2. When reduced, this ligand is removed by conversion to water, with the simultaneous reduction of Ni3+ to Ni2+. The enzyme can then bind H2, probably at the Fe atom, and is then able to catalyze the heterolytic cleavage to 2H+ + 2e−. Details of this enzymatic mechanism have been depicted previously (228). Remarkably, none of the [NiFe] hydrogenases transfers electrons to ferredoxin or to another low-potential electron carrier. The structure/function relationship of anaerobic gas-processing metalloenzymes has recently been summarized (73).
The biosynthesis of hydrogenase, including the synthesis of the metallocenter and the incorporation of the CO and CN− ligands, has been studied extensively for hydrogenase 3 from E. coli by Böck and colleagues in Munich and has been reviewed (226, 228). The concentration of H2 in cells is sensed by hupUV gene products, which in other organisms are termed HoxBC. These proteins also catalyze the cleavage of H2 and can therefore be considered an independent, regulatory hydrogenase, e.g., in Ralstonia eutropha (79).
HYDROGENASES IN CYANOBACTERIA
Hydrogenase Types in Cyanobacteria

The subject of hydrogenase types in cyanobacteria has been repeatedly reviewed (7, 81, 82, 91, 99, 134, 194, 199, 212, 214, 222). The reader is particularly referred to the very detailed and elaborate review by Tamagnini et al. (214). Cyanobacteria contain two different Ni hydrogenases, defined by their physiological role as either an uptake or a bidirectional (reversible) enzyme. There is no evidence for an H2-sensing regulatory hydrogenase encoded by hupUV. Unlike some hydrogenases in anaerobic bacteria, cyanobacterial hydrogenases do not contain Se.
Uptake hydrogenase. The uptake hydrogenase is encoded by the contiguous and cotranscribed genes hupSL and is associated with nitrogenase functioning. Generally, intact N2-fixing cyanobacteria show very little net H2 production due to the efficient recycling of the gas by uptake hydrogenase. This H2 consumption proceeds by the respiration- and photosystem I-dependent pathways (33). In cyanobacteria, respiration and photosynthesis share the cytochrome bc complex (respiratory complex III), from where the electrons are allocated either to the donor side of photosystem I to generate reduced ferredoxin or to respiratory complex IV accompanied by O2 consumption. Factors that control electron allocation to either photosystem I or respiration in light-grown cyanobacteria have not been elucidated. Likewise, whether electrons from H2 and uptake hydrogenase enter at the plastoquinone pool or at a cytochrome b, as in Xanthobacter autotrophicus (184) and presumably in Bradyrhizobium japonicum (65), is not known for cyanobacteria. Transcription starts before hupS and terminates immediately after hupL; thus, the electron acceptor is not cotranscribed on this operon. The enzyme does not couple with any other electron carrier with a redox potential more negative than −300 mV, which explains its unidirectional physiological function and name. Uptake hydrogenase is membrane bound and has never been characterized in the homogeneous form. Recent immunological studies confirmed its association with the thylakoid membranes of three cyanobacterial strains (195), which corroborates earlier studies with thylakoid preparations (reviewed, e.g., in reference 164). The sequences indicate that the larger subunit (HupL) has a molecular mass of about 60 kDa and that the smaller one (HupS) is about half that size.
In accordance with the postulates of Dixon (58), which were developed for Rhizobium nodules, H2 utilization in cyanobacteria likely functions (i) to remove O2 from the nitrogenase site via the respiratory oxyhydrogen (Knallgas) reaction, (ii) to regain ATP inevitably lost in H2 production during nitrogenase catalysis, and (iii) to prevent a deleterious buildup of a high concentration of H2 which affects nitrogenase activity. Such a situation might apply particularly to heterocysts. In addition, H2 uptake might provide additional reductant for N2 fixation, photosynthesis, and other reductive processes.
Rather simple physiological experiments, performed in student courses in the Cologne laboratory over the years, show that N2 fixation (C2H2 reduction), e.g., by Anabaena variabilis, is much less sensitive to exposure to O2 when the assay mixtures are supplemented with exogenous H2 (29, 34). Uptake hydrogenase-deficient mutants of several cyanobacteria produce roughly three times more H2 than wild-type cells (for references, see reference 214). However, their growth rates under N2-fixing conditions are essentially the same (125).
In other bacteria, a twin-arginine signal peptide at the N terminus and a hydrophobic motif, both presumably involved in translocation and anchorage, are typical for many membrane-bound hydrogenases. Such motifs are missing from the cyanobacterial HupS and HupL, which also do not contain signatures indicative of membrane insertion. As in other organisms, however, HupL contains the C-terminal extension that is cleaved off at the last step of maturation by a specific endopeptidase encoded by hupW.
In approximately half of the heterocystous strains (21, 213), hupL is interrupted by a 9.5-kb element that is excised during the late stage of heterocyst differentiation before the gene can be transcribed. The excision is catalyzed by the recombinase XisC, whose gene is located on this element. XisC is sufficient to catalyze the site-specific recombination in hupL (43). The physiological advantage of such a site-specific recombination is not obvious. Of the two best-studied heterocystous cyanobacteria, Anabaena (Nostoc) sp. strain PCC 7120 shows this gene rearrangement but Anabaena variabilis ATCC 29413 does not.
Uptake hydrogenases occur in almost all N2-fixing microorganisms except for some Rhizobium strains (35, 36) and Herbaspirillum seropedicae (F. Pedrosa, personal communication). In cyanobacteria, the enzyme is present in all N2-fixing species with the exception of an N2-fixing unicellular strain, Synechococcus sp. strain BG 043511 (132), and some Chroococcidiopsis isolates (see below). No uptake hydrogenase and none of its genes have been unambiguously detected in non-N2-fixing cyanobacteria. It is not clear whether an uptake hydrogenase is expressed in parallel with the second Mo nitrogenase which is active in vegetative cells of A. variabilis upon transition to anaerobiosis. Low transcript levels of hupSL have been reported for A. variabilis ATCC 29413 cells grown in the presence of ammonia (231).
The formation of hupSL transcripts may be controlled by factors such as Ni availability, anaerobiosis, the presence of H2, and the absence of combined nitrogen, and may proceed in parallel with heterocyst formation (92, 98, 231). The transcriptional regulator NtcA, which controls cyanobacterial genes involved in nitrogen metabolism, has also been reported to regulate hupSL expression (231). The NtcA binding site was identified 427 bp upstream of the transcriptional start site of hupSL in A. variabilis, whereas most other NtcA binding sites are located not more than 40 bp from the start site (231). The NtcA binding site identified in Nostoc punctiforme ATCC 29133 is TGTN9ACA, which differs from the optimal one, GTAN8TAC, and might therefore indicate only weak binding (98). A shorter promoter fragment, covering 57 bp upstream and 258 bp downstream of the transcription start site, was sufficient for high heterocyst-specific expression of hupSL independent of NtcA (98). Surprisingly, hupSL expression in A. variabilis ATCC 29413 was not regulated by H2 (231). This is in sharp contrast to the situation in Nostoc punctiforme and N. muscorum (10). In addition to transcriptional regulation, uptake hydrogenase synthesis could also be controlled at the posttranslational level. This enzyme, but not the bidirectional hydrogenase, is activated by thioredoxin (164). N2 fixation by cyanobacteria is largely light stimulated due to the demand for reductant (reduced ferredoxin) and ATP. Therefore, activation of the uptake hydrogenase by photosynthetically reduced thioredoxin makes sense physiologically, because more H2 is produced by nitrogenase in the light than in darkness. The number of proteins activated by thioredoxin is high in cyanobacteria and chloroplasts, but the target enzymes differ in the two entities (124).
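Degenerate binding-site motifs of the kind quoted above are straightforward to screen for computationally. The following minimal sketch (Python; the promoter fragment is entirely hypothetical, and only the motif spellings TGT-N9-ACA and GTA-N8-TAC are taken from the text) scans a sequence for both NtcA-site variants:

```python
import re

# Degenerate NtcA-binding motifs from the text, written as regular
# expressions: the N. punctiforme hupSL site TGT-N9-ACA and the
# optimal consensus GTA-N8-TAC.
MOTIFS = {
    "TGT-N9-ACA": re.compile(r"TGT[ACGT]{9}ACA"),
    "GTA-N8-TAC": re.compile(r"GTA[ACGT]{8}TAC"),
}

def scan_promoter(seq):
    """Return (motif name, 0-based start position) for every match in seq."""
    seq = seq.upper()
    hits = []
    for name, pattern in MOTIFS.items():
        for m in pattern.finditer(seq):
            hits.append((name, m.start()))
    return hits

# Hypothetical promoter fragment with a planted TGT-N9-ACA site:
promoter = "GGCATGTAGGCTTAGCACAATTTCCG"
print(scan_promoter(promoter))  # one TGT-N9-ACA hit at position 4
```

Such a scan only reports sequence matches, of course; whether NtcA actually binds (and how strongly) must be settled experimentally, as the gel-shift work cited above illustrates.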
Other transcription and translation cues will undoubtedly be resolved in the near future to further understanding of the signal cascade involved in the synthesis of the uptake hydrogenase. The currently available data suggest that different cyanobacteria differ markedly in their patterns of expression of this protein.
Bidirectional hydrogenase. After extensive controversy, the work of Houchins and Burris (100, 101) clearly showed that N2-fixing cyanobacteria may contain another hydrogenase in addition to the uptake enzyme. This reversible, bidirectional hydrogenase, which catalyzes both H2 uptake and reduced methyl viologen-dependent H2 evolution, was separated from the unidirectional uptake enzyme in crude extracts of Anabaena (Nostoc) sp. strain PCC 7120. Later, molecular biological characterization showed that the bidirectional hydrogenase in cyanobacteria is, surprisingly, a NAD(P)H-dependent enzyme (187). This finding has been hailed as a milestone in cyanobacterial hydrogenase research (212). The enzyme has a pentameric structure encoded by the genes hoxEFUYH in A. variabilis. HoxYH constitutes the hydrogenase, which contains the motifs for binding both Ni-Fe-S and Fe-S centers. HoxFU is the diaphorase part that transfers the electrons to NAD(P)+ and possesses binding sites for NAD(P)+, flavin mononucleotide (FMN), and Fe-S centers. The enzyme complex contains a further subunit, HoxE, which copurifies with the active bidirectional enzyme (188). HoxE possesses a motif for binding an Fe-S center and was therefore thought to couple the enzyme to the respiratory and photosynthetic electron transport chain on the thylakoids and possibly also at the cytoplasmic membrane. However, the role of the hoxE gene product has not been resolved yet, despite extensive research.
In organisms other than cyanobacteria, a pentameric NADH-dependent bidirectional hydrogenase is present in Thiocapsa roseopersicina (174) and in Allochromatium vinosum (108, 127). The best-studied bidirectional hydrogenase, the NADH-dependent enzyme from Ralstonia eutropha, is encoded only by hoxFUYH (75).
The locations of the five structural genes hoxEFUYH on the chromosome differ from one cyanobacterium to the next (25, 199, 214). In some cyanobacteria, they are clustered on one part of the chromosome, though interspersed with ORFs at different positions. In others, they occur in two different parts of the genome separated by several kilobases of intervening DNA. Similar to the case for HupL of the uptake hydrogenase, HoxH of the bidirectional hydrogenase undergoes maturation at the C terminus, catalyzed by a specific endopeptidase encoded by hoxW. The expression of hydrogenase genes in Synechococcus sp. PCC 7942 is under the control of the circadian clock, as shown for two promoters of the gene cluster (186). When expressed, the native protein might function as a dimeric assembly complex Hox(EFUYH)2 (188). In extracts, it catalyzes both NAD(P)+-dependent H2 uptake and H2 evolution with NAD(P)H as the electron donor (190).
Bidirectional hydrogenase is widespread in cyanobacteria. It is present in unicellular, filamentous, and heterocystous species, where it occurs in both heterocysts and vegetative cells (213). The enzyme is apparently not present in marine cyanobacteria isolated from the open ocean (132). It is expressed independently of N2 fixation and thus is present in cells grown aerobically and with combined nitrogen. However, it is more O2 sensitive than uptake hydrogenase, probably due to oxidation to its inactive state (51). When reduced, it can be purified as a pentameric complex (188).
The regulation of the expression of the bidirectional hydrogenase in cyanobacteria differs with the physical location of the hox genes on the chromosome in the species. In Synechococcus sp. PCC 7942 (= Anacystis nidulans), the genes are organized into two clusters, hoxEF and hoxUYHWhypAB, and are regulated by three promoters, one before each of hoxE, hoxU, and hoxW (23, 186). In Synechocystis sp. PCC 6803, the hoxEFUYH genes are cotranscribed, with the transcription start point located 168 bp upstream of the start codon (87, 158). Taking the high diversity of the different cyanobacterial species into account, expression of the bidirectional hydrogenase in cyanobacteria seems to be species specific.
Over the last several years, significant progress has been achieved in the identification of the transcription factors regulating the expression of bidirectional hydrogenase, and details of the subject are found in a very recent review (159). NtcA does not seem to be the transcriptional activator; instead, a LexA-related protein (87, 158) and two members of the AbrB-like family (157) appear to be activators. In other organisms, LexA activates the expression of a cascade of genes coding for enzymes involved in either DNA repair or carbon starvation. A LexA-depleted mutant of Synechocystis sp. PCC 6803 had lower hydrogenase activity than the wild type, indicating that LexA operates as a transcription activator of hox genes in this cyanobacterium (87). The binding site of LexA upstream of hoxE of Synechocystis sp. PCC 6803 remains surprisingly unclear (214). LexA may bind to the region from bp −198 to −338 from the translational start point (158), to the region from bp −592 to −690 from the hoxE start codon (87), or to both regions (159). The two distant LexA binding regions in the hox promoter could indicate the occurrence of a DNA loop involved in gene transcription (86, 159), which awaits experimental proof. LexA may act as a mediator of the redox-responsive regulation of hox gene expression (5). In Synechocystis sp. strain PCC 6803, LexA binds as a dimer to 12-bp direct repeats containing a CTAN9CTA sequence in target genes (170).
AbrB proteins act as transcription factors of antibiotic resistance in organisms other than cyanobacteria. An AbrB-like protein (Sll0359) was recently shown to interact specifically with the promoter region of the hox genes and with its own promoter region (157). Whereas this AbrB-like protein works as a transcription activator in Synechocystis sp. PCC 6803, another of these regulator proteins (Sll0822) acts as a repressor of hox gene expression, because the hox genes were significantly upregulated in a completely segregated Δsll0822 mutant (105). This transcription factor works in parallel to, but apparently independently of, the long-known nitrogen transcriptional control element NtcA (97) in the regulation of the expression of genes coding for nitrogen assimilation enzymes (105).
The cyanobacterial transcription factors, the LexA- and AbrB-like proteins, show significant divergence in their sequences and functions from the counterpart proteins in other organisms, and their activity may be regulated by posttranscriptional modifications (159). They are members of an apparently complex signal cascade that directs the expression of the bidirectional hydrogenase genes. Their expression and interactions in response to environmental cues are likely to be a subject of extensive research in the near future (159). The identification of further transcription factors of the bidirectional hydrogenase is to be expected (116).
Besides being inactivated by O2 and showing no light dependence (51), the bidirectional hydrogenase seems to be activated by H2 at the transcriptional level, the translational level, or both. The effects of H2 on bidirectional hydrogenase synthesis are not understood and appear to vary with the organism and the culture conditions employed. In some cases, high hydrogenase activity could be the result of bacterial contamination of slime-forming cyanobacterial cultures.
The biosynthesis and maturation of the [NiFe] hydrogenase have been characterized for the enzyme from E. coli (20). The hyp genes required for the synthesis of the hydrogenase are similar in E. coli and cyanobacteria and are scattered throughout the genomes of those cyanobacteria in which their occurrence has been examined (reviewed in reference 214). Both uptake and reversible hydrogenases appear to utilize the same hyp gene products for their biosynthesis. However, the last step, the maturation at the C terminus by endopeptidase, seems to be specific for the two enzymes, with HupW catalyzing the final cleavage of uptake hydrogenase and HoxW involved in processing the bidirectional enzyme (233). Both endopeptidases are transcribed from their own promoters (67) and are under regulatory control similar to that of the hydrogenases they cleave (54).
In contrast to uptake hydrogenase, the bidirectional enzyme is recovered in the soluble fraction after cyanobacterial cells are broken. The exact location of the enzyme inside the cells is unknown (Fig. 7). Immunological (109) and membrane solubilization (110) studies indicated a location at/on the cytoplasmic membrane in Anacystis nidulans (Synechococcus PCC 6301). Other researchers, using different antibodies, found a location in the cytoplasm, with some preferential association with the thylakoids (213, 214). However, all these investigations with antibodies were performed before the true nature of the hydrogenase as a pentameric NAD(P)H-dependent complex was recognized. Clearly, this issue needs to be reexamined with newly raised antibodies.
Possible coupling of the bidirectional hydrogenase to the cytoplasmic membrane in cyanobacteria. The HoxE subunit may serve as a device for coupling to the membrane, but this has not been verified experimentally. Solubilization experiments indicate that the bidirectional hydrogenase is loosely membrane bound (110).
The physiological function of this constitutively expressed bidirectional hydrogenase in photosynthetic, aerobic cyanobacteria has been hotly debated and remains controversial. Work with mutants of Anabaena (Nostoc) sp. PCC 7120 (139) showed that the bidirectional hydrogenase is unable to support N2 fixation. Its high affinity (low apparent Km value) for H2 suggests that the enzyme functions in H2 utilization under physiological conditions (99). Indeed, H2 uptake catalyzed by the bidirectional hydrogenase can support photosynthetic reactions such as CO2 fixation and also, to some extent, nitrite or sulfite reduction (215). The rates of these reduction reactions with H2 as the only electron donor are low, however, compared to the same photosynthetic activities with H2O as the electron source (31). Bacteria such as Ralstonia eutropha or Xanthobacter autotrophicus (185) are able to grow autotrophically with H2 as the sole source of reductant and energy, and some of them, such as Bradyrhizobium japonicum, can do so even under N2-fixing conditions (204). H2-dependent growth in darkness has never been demonstrated for any cyanobacterium. Such anoxygenic growth is possible when energy is provided by cyclic photophosphorylation and the electrons are provided from Na2S or H2S in some cyanobacteria, such as Oscillatoria limnetica (78). However, to our knowledge, H2- and photosystem I-supported growth has not yet been demonstrated in cells of Anabaena, Nostoc, or other autotrophic unicellular species when photosystem II is impaired by use of dichlorophenyldimethylurea (DCMU). In the two facultative anoxygenic cyanobacteria Oscillatoria limnetica and Aphanothece halophytica, however, H2 has been reported to substitute for H2S in supporting CO2 fixation in a photosystem I-driven reaction (13).
In all organisms, respiratory complex I consists of at least 14 subunits, but only 11 subunits of the cyanobacterial NADPH-dehydrogenase complex I have been identified so far. The diaphorase genes hoxEFU show high sequence homologies to the three missing genes. Although it has been suggested that the hoxEFU gene products are shared by the bidirectional hydrogenase and respiratory complex I (189), the experimental evidence argues against this suggestion. Mutants with a mutation in either hoxF (102) or hoxU (22) show no bidirectional hydrogenase activity but have unimpaired respiratory activity. Furthermore, Nostoc PCC 73102 has no bidirectional hydrogenase activity at all but respires at rates comparable to those of other cyanobacteria (22). This could mean that cyanobacterial respiration partly circumvents respiratory complex I and utilizes the succinate dehydrogenase complex instead, as may be inferred from studies with mutants (49). The fate of the NAD(P)H generated in carbon catabolism then remains to be determined. The electron input pathway into respiratory complex I in cyanobacteria remains unknown (11).
Some authors consider the bidirectional hydrogenase to function in the transition from anaerobiosis in the dark to aerobic conditions in the light (6, 51, 88, 132). In order to avoid an overload of reducing equivalents, the organisms dispose of the excess by generating a burst of H2 via photosynthetic electron transport, ferredoxin, FNR, NADPH, and hydrogenase. Such sudden H2 production, lasting only seconds to a few minutes, has been observed repeatedly. However, the physiological relevance of this observation is questionable, because the sun does not rise so suddenly in the morning that it overreduces soil cyanobacteria. Furthermore, in aqueous habitats, turbulence is hardly so effective that it exposes cyanobacteria to extremely high light intensities within a very short time. Cyanobacteria may, however, become overreduced when continuously exposed to too bright a light on a very sunny day and then be forced to use hydrogenase as a valve for disposing of the excess of photosynthetically produced reductants, as shown in laboratory cultures of Anabaena cylindrica (119).
As stated in an extensive review (207), the majority of cyanobacteria are obligate photoautotrophs. Only a few species are able to grow chemoheterotrophically at the expense of a limited number of organic carbon compounds, and they do so with O2 as the terminal respiratory electron acceptor. Anaerobic chemoorganotrophic growth is exceptional in cyanobacteria. Thus, most species accumulate glycogen in the light, which they then have to degrade in darkness. Glucose residues from glycogen are utilized via the oxidative pentose phosphate pathway, finally resulting in pyruvate (208). Its further degradation is hampered by the fact that the tricarboxylic acid cycle is incomplete in cyanobacteria, because neither an oxoglutarate dehydrogenase complex nor an oxoglutarate:ferredoxin oxidoreductase is present (208), as has been confirmed by recent large-scale proteomic studies (162, 163). This prevents the complete degradation of the C2 moiety to CO2 and NAD(P)H. Cyanobacteria apparently prefer to utilize NADP+ rather than NAD+ in catabolism (51), since several enzymes, such as isocitrate dehydrogenase (165) and glyceraldehyde-3-phosphate dehydrogenase (166), are NADP+ rather than NAD+ dependent. In darkness, most cyanobacteria have to generate their energy via the oxidative pentose phosphate pathway and the sequence pyruvate → pyruvate:ferredoxin oxidoreductase → reduced ferredoxin → FNR → NADPH (Fig. 8). By using the lux reporter system, it was shown that pyruvate:ferredoxin oxidoreductase is constitutively expressed, even in aerobically grown A. variabilis (191). In dense cultures, biofilms, mats, or cyanobacterial blooms, the amount of O2 may rapidly become insufficient to oxidize all NAD(P)H by respiration. Thus, the NADPH generated via pyruvate:ferredoxin oxidoreductase and FNR must then be reoxidized via the bidirectional hydrogenase in order to avoid overreduction of the cells.
The generation of H2 (E0′ = −420 mV for H2/2H+) from NAD(P)H [E0′ = −320 mV for NAD(P)H/NAD(P)+] is thermodynamically unfavorable. It requires a 1,000-fold excess of reduced pyridine nucleotides, but this can rapidly be generated in dark-kept cells under anaerobic conditions. To prevent overreduction of the cells during the night, reducing equivalents must be disposed of as H2 (Fig. 8). Similar to the case for pyruvate:ferredoxin oxidoreductase, bidirectional hydrogenase is also constitutively expressed under aerobic growth conditions. When cyanobacteria such as Synechocystis, Anabaena, or Nostoc sp. are transferred to darkness and anaerobiosis, H2 production begins immediately without a distinct lag phase. High hydrogenase activity under anaerobic conditions was described long ago (99), and an increase in the hoxH (67, 68) or hoxEF (116) transcription levels during dark periods was recently detected in different cyanobacteria.
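The magnitude of the required excess follows directly from the Nernst equation. A minimal back-of-the-envelope calculation (Python; midpoint potentials as quoted in the text) gives the [NADPH]/[NADP+] ratio at which the pyridine nucleotide couple is pulled down to the H2/2H+ potential:

```python
import math

# Standard midpoint potentials at pH 7 from the text (volts):
E0_H2 = -0.420      # 2H+/H2 couple
E0_NADPH = -0.320   # NADP+/NADPH couple

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # 25 degrees C, in K
F = 96485.0  # Faraday constant, C mol^-1
n = 2        # electrons transferred per H2

# Nernst: the NADP couple reaches E0_H2 when
#   E0_NADPH - (RT/nF) * ln([NADPH]/[NADP+]) = E0_H2
# so the required concentration ratio is:
ratio = math.exp(n * F * (E0_NADPH - E0_H2) / (R * T))
print(f"required [NADPH]/[NADP+] ≈ {ratio:.0f}")  # ~2400
```

The exact result is about 2.4 × 10^3, i.e., on the order of the 1,000-fold excess of reduced pyridine nucleotides cited above.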
Roles of bidirectional and uptake hydrogenases in cyanobacterial hydrogen metabolism. Bidirectional hydrogenase is active mainly in the dark and under anaerobic conditions to dispose of reductants, whereas uptake hydrogenase functions in recycling the hydrogen lost during nitrogen fixation.
Thus, cyanobacteria might have retained the genes coding for these enzymes of anaerobes (hydrogenase and pyruvate:ferredoxin oxidoreductase) because of the obligate autotrophy of many species. The essential role of hydrogenase during fermentation of cyanobacteria has also been suggested by others (224). As recently shown (218), cyanobacteria contain one petH gene that encodes two isoforms of FNR, one of which accumulates under heterotrophic conditions. It remains to be shown whether the latter is specifically involved in the fermentative degradation of pyruvate. The same question also applies to the two isoforms of pyruvate:ferredoxin oxidoreductase in heterocystous species. As mentioned above, acetyl coenzyme A formed in pyruvate fermentation may be converted to ATP by phosphotransacetylase and acetate kinase, but this also remains to be shown. ATP formation by this pathway must be accompanied by the formation of acetate, but the fate of any acetate produced remains unknown.
In photosynthetic eukaryotic algae, hydrogenase is located in plastids (210). The ancestors of plastids are believed to be organisms similar to the filamentous, heterocyst-forming, N2-fixing species of class IV of the cyanobacteria, related to the current Nostoc or Anabaena spp. (53). If so, it is surprising that, during evolution, plastids have lost not only N2 fixation genes but also both gene sets that encode the bidirectional and uptake hydrogenases. When hydrogenase occurs at all in plastids, it is an [FeFe] hydrogenase of a completely unknown origin.
Similarly, it is totally unclear how both hydrogenases have been acquired by cyanobacteria from bacteria over evolutionary time. With respect to photosynthetic bacteria, the green nonsulfur bacterium Chloroflexus aurantiacus possesses both uptake and bidirectional hydrogenases, which has led to the assumption that a Chloroflexus-like bacterium is the ancestor of C. aurantiacus and cyanobacteria (132). On the other hand, the first phototrophs may have been anoxygenic procyanobacteria from which the Chlorobiaceae, Heliobacillaceae, Chloroflexaceae, purple sulfur bacteria, and cyanobacteria descended in parallel and independently of each other (148). The gene sets of both cyanobacterial hydrogenases may have been acquired vertically or laterally. A lateral gene transfer is particularly difficult to conceive for the bidirectional enzyme because its genes may be scattered throughout the genome of a species. Similarly, the loss of hydrogenase from one cyanobacterial isolate but not from another may be difficult to explain.
The unicellular cyanobacterium Chroococcidiopsis sp. (Fig. 9A to C) is regarded as a fossil relict which may have properties related to those of the first O2-evolving cyanobacterium that developed some 3 × 10^9 years ago (69). Chroococcidiopsis has been proposed as the organism best suited to go on exploratory missions to Mars (48). Today, Chroococcidiopsis thrives at sites with extremely hostile conditions (24). The strains Chroococcidiopsis thermalis ATCC 29380 (1) and CALU 758 (197) were found to possess the bidirectional, but not the uptake, hydrogenase and to fix N2 (reduce C2H2) under microaerobic conditions. However, experiments performed in the Cologne laboratory (106) showed that the hydrogenase activities of Chroococcidiopsis sp. strain PCC 7203 exhibit some unusual features. Southern hybridizations and PCR experiments with probes for hupL and hoxH, hoxF, or hoxE developed from A. variabilis sequences indicated the presence of the bidirectional hydrogenase but the absence of the uptake enzyme in Chroococcidiopsis PCC 7203. In this cyanobacterial strain, H2 and the bidirectional hydrogenase can support nitrogenase activity (C2H2 reduction), but only at a rather low concentration of 0.3 to 0.5% O2 in the gas phase. Above that concentration, O2 is completely inhibitory, presumably by oxidizing the NiFe center of the enzyme to its inactive oxidized state or (less likely) by affecting an extremely O2-sensitive nitrogenase in this organism. In more than 100 different experiments performed in air-free vessels, about 50% showed no H2-supported C2H2 reduction activity, whereas the outcome was positive in the other half. However, the optimal O2 concentration was 0.3% in one experiment and 0.5% in the next, depending on the concentration of cells in the assay vessels, the photosynthetic O2 production activity of the cells, and the success in getting the vessels air free.
The activity in the positive experiments must come from the bidirectional hydrogenase, since uptake hydrogenases are not so sensitive toward O2. No C2H2 reduction activity was seen in the dark. The results indicate that the bidirectional hydrogenase of Chroococcidiopsis PCC 7203 can only poorly protect nitrogenase from damage by O2. Thus, the bidirectional hydrogenase may be a fossil relict, together with the organism itself. In early geological times, it may have served in fermentation and may have effectively supplied reducing equivalents to nitrogenase. However, when the concentration of O2 in the atmosphere rose above 0.3 to 0.5%, bidirectional hydrogenase may have been inactivated. Then, heterocysts that could better accommodate and protect nitrogenase had to be developed. Indeed, Chroococcidiopsis has been discussed as an ancestor of heterocyst-forming species (69).
Chroococcidiopsis sp., as isolated from the gypsum rock "Sachsenstein" near Bad Sachsa, Harz Mountains, Germany (24). This cyanobacterium is regarded as a fossil relict and possible ancestor of heterocystous cyanobacteria (69) (see the text). It now occupies ecological niches such as fissures in gypsum, where it might be exposed to light intensities that are low but still sufficient for photosynthesis. It forms packages of 16 cells or multiples thereof (A). The gypsum shards can easily be peeled off by hand (B), and the greenish-blue layer consisting almost exclusively of Chroococcidiopsis below the shards then becomes visible (C).
POTENTIAL FOR EXPLOITING CYANOBACTERIA IN SOLAR ENERGY CONVERSION PROGRAMS FOR PRODUCTION OF COMBUSTIBLE ENERGY (HYDROGEN)
Of all organisms, cyanobacteria have the simplest nutrient requirements in nature. They thrive photoautotrophically on simple inorganic media, and many of them do not need combined nitrogen in their medium. They can be grown with a reasonably fast generation time of 2 to 3 h for unicellular forms (though not as fast as fermentative bacteria, such as E. coli, whose generation time can be close to 10 min). A laudable goal is to generate clean energy, without generating greenhouse gases such as CO2 or NOx, by exploiting the photosynthetically produced reductant (ferredoxin) for H2 production. Doing so demands the separation of the photosynthetically produced O2 from H2 production. Research in this area started around 1973, during the first global energy crisis, and has recently found renewed interest due to concerns over global warming. Success in this area demands the continuous production of H2 over weeks or months, followed by effective utilization of the cyanobacterial cells produced. Cyanobacterial proteins are not optimal to feed to cattle but can be used as dietary supplements with various positive effects for humans and animals (77, 114). One obstacle is that neither cyanobacterial hydrogenase couples with the reduced ferredoxin generated photosynthetically. Presumably based on their own research interests, different researchers favor the use of either hydrogenase or nitrogenase in solar energy conversion programs.
A comparison of the published rates of H2 formation suffers from the fact that different laboratories express their data in different units. As a basis for comparing the various results, the following gross estimates are made (Table 1). In all photosynthetic organisms, chlorophyll a constitutes 1 to 2% of the dry weight. Taking the average of 1.5%, the cyanobacterial dry weight can be estimated by multiplying the chlorophyll a content by a factor of 67 (http://www.chebucto.ns.ca/ccn/info/Science/SWCS/DATA/PARAMETERS/CHA/cha). Moreover, chlorophyll a has a molecular weight of slightly less than 1,000, and 1 mg of chlorophyll corresponds to 20 to 25 mg of cell protein (20 mg is used here). In photosynthesis, rates have commonly been referred to mg chlorophyll per h since the time of Willstätter and Stoll (232). The C/N ratio is around 6 in cells, and the maximal photosynthetic CO2 fixation rates are roughly 100 μmol/h·mg chlorophyll. Thus, the N2 fixation rate is unlikely to exceed 20 μmol NH4+ produced/h·mg chlorophyll. If all electrons transferred to nitrogenase were reallocated to reduce H+, H2 production by cyanobacteria would be around 40 μmol H2 produced/h·mg chlorophyll, based on the fact that four electrons are needed for NH4+ production (with concomitant H2 evolution) but only two electrons are needed for H2 formation (equations 1 and 2). The data in Table 1 also use a gas molar volume of 24 liters at 25°C.
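The unit conversions above can be collected into a small helper for putting literature rates on a common basis. This is only a sketch built on the rules of thumb quoted in the text (chlorophyll a as 1.5% of dry weight, 20 mg protein per mg chlorophyll, 24 liters/mol at 25°C); the function name and the choice of output units are illustrative, not part of any published protocol:

```python
# Rough conversion factors, taken from the estimates in the text.
CHL_FRACTION = 0.015               # chlorophyll a ~1.5% of dry weight
DW_PER_MG_CHL = 1 / CHL_FRACTION   # ~67 mg dry weight per mg chl a
PROTEIN_PER_MG_CHL = 20.0          # mg protein per mg chl a
ML_PER_UMOL = 0.024                # 24 liters/mol at 25 degrees C

def convert_h2_rate(rate_umol_per_h_mg_chl):
    """Convert a rate in umol H2/(h * mg chl a) to the other reference
    bases commonly found in the literature (gross estimates only)."""
    return {
        "umol/h/mg dry wt": rate_umol_per_h_mg_chl / DW_PER_MG_CHL,
        "umol/h/mg protein": rate_umol_per_h_mg_chl / PROTEIN_PER_MG_CHL,
        "mL H2/h/mg chl a": rate_umol_per_h_mg_chl * ML_PER_UMOL,
    }

# The theoretical ceiling derived in the text: ~40 umol H2/h/mg chl a,
# i.e. ~0.6 umol/h/mg dry weight, ~2 umol/h/mg protein, ~1 mL H2/h/mg chl a.
print(convert_h2_rate(40.0))
```

Applying the helper to a published rate immediately shows whether it falls within or well above the ~40 μmol/h·mg chlorophyll ceiling estimated above.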
Examples of published rates of cyanobacterial H2 formation
On the basis of the considerations described above, the few significantly higher activities reported in the literature (Table 1) seem to require reassessment. If the experiments were not done with great care, less than the total chlorophyll may have been released from the cyanobacterial cells, which would lead to an overestimation of the specific activities. With artificial photosystem I/Pt or Au nanoparticle bioconjugates, maximal H2 production activities were 49 μmol/mg chlorophyll·h (85), which is in the same range as the maximal formation theoretically achievable with cyanobacteria.
To overcome the problem that cyanobacterial hydrogenases do not couple with ferredoxin, the clostridial, ferredoxin-dependent hydrogenase I was heterologously expressed in the unicellular cyanobacterium Synechococcus PCC 7942 (8). Cell extracts of the genetically engineered isolate showed about 3-fold-higher activity than the wild type. An alternative genetic approach was to fuse the O2-insensitive membrane-bound hydrogenase of Ralstonia eutropha to the photosystem I subunit PsaE of Thermosynechococcus elongatus; the fusion protein associated with PSI from Synechocystis sp. PCC 6803 (104). This artificial hydrogenase-PSI complex displayed light-driven H2 production, but only at low rates, and the activity was suppressed by ferredoxin and FNR (104). The latter problem was circumvented by modifying the ferredoxin-binding site of PsaE (103). There have been other attempts, with limited success, to express a foreign hydrogenase in cyanobacteria (8) or a cyanobacterial hydrogenase in a foreign organism (135). Approaches with cyanobacteria are based on the assumption that the membrane-bound [NiFe] hydrogenases from Ralstonia eutropha, R. metallidurans, Allochromatium vinosum, or others are more O2 tolerant than the cyanobacterial enzymes (64). Since neither the bidirectional nor the uptake hydrogenase of cyanobacteria has ever been biochemically characterized in pure form, this assumption may not necessarily be true, particularly for the bidirectional hydrogenase. This enzyme, a complex of the five subunits HoxEFUYH, may easily fall apart upon purification, not necessarily due to any inferred O2 lability. The current state of attempts to improve H2 formation by heterologous and recombinant expression of hydrogenases has been summarized and reviewed (64, 131).
In intact cyanobacterial cells, H2 produced by nitrogenase is more or less completely recycled by hydrogenase so that often almost no net H2 production is detectable. Uptake hydrogenase, but not the bidirectional enzyme, is effective in recycling the gas (139). Mutants defective in uptake hydrogenase show a much higher H2 production than wild-type cells. This was shown some years ago with mutants of Anabaena variabilis obtained by classical N-methyl-N′-nitro-N-nitrosoguanidine (NTG) mutagenesis (147) and more recently with strains that were defective in uptake hydrogenase due to site-directed mutagenesis (92, 140).
As recently published (34), Anabaena variabilis and A. azotica produce large amounts of H2 when incubated under high concentrations of H2 and C2H2 (Fig. 10A). This H2 production, on top of the H2 added, is higher in V-grown than in Mo-grown cultures of A. azotica (34). The amount of H2 formed increases, and C2H4 production decreases, in parallel with the concentration of H2 added to the vessels (Fig. 10B). In line with these findings, a 2- to 4-fold increase of light-induced H2 production was observed in Nostoc muscorum preincubated under argon and H2 (182). Although added C2H2 is known to inhibit the uptake hydrogenase (205), this observation does not explain the effect of increasing amounts of H2. The effects of H2 and C2H2 on nitrogenase itself and/or on photosynthetic electron flow to nitrogenase cannot be mechanistically explained as yet. However, these findings imply that all electrons coming to nitrogenase can be directed to produce H2, particularly in V-grown cells. The rate of ∼40 μmol H2 produced/h·mg chlorophyll reflects the maximal photosynthetic H2-forming potential of cyanobacterial suspension cultures.
(A) H2 production by Anabaena azotica (V or Mo grown) and A. variabilis. The lower parts of the columns indicate the amount of H2 added to the vessels by syringes and determined by gas chromatography at the start of the experiments. The gas phase was 85% argon and 15% C2H2 (vol/vol). "Complete" means that the gas phase was H2 (about 1 bar). (B) Inhibition of C2H2 reduction by increasing concentrations of H2 added to the assays, using Mo-grown A. azotica. The inhibition pattern was the same for V-grown A. azotica and for Mo-grown A. variabilis (not shown). The data are from reference 34.
Such an interpretation of the data indicates that further genetic engineering of cyanobacteria, either by transferring an alien hydrogenase or nitrogenase or by genetically manipulating the acceptor side of photosystem I, is unlikely to enhance the rate of cyanobacterial H2 production. The compilation of the data in Table 1 shows that maximal H2 production in suspension cultures is already achieved by coupling either nitrogenase or hydrogenase to the cyanobacterial photosystem I. A temporal separation of the photosynthetic organic carbon formation (glycogen) in light followed by a fermentative degradation of these carbohydrates in the dark (3) is unlikely to enhance H2 production rates, although it would separate H2 and O2 production from each other. Apart from this, rates of H2 production in strict fermentative bacteria (clostridia) are at least 3 orders of magnitude higher than those in cyanobacterial fermentations. Therefore, clostridia or other fermentative bacteria with a much more efficient [Fe-Fe] hydrogenase could possibly be coupled and exploited to degrade the cyanobacterial photosynthetically produced organic carbon for maximal H2 production.
The transfer of a hydrogenase that is insensitive to O2 exposure, either produced by genetic modification or taken from an alien organism, may facilitate, but may not be obligatory for, commercially acceptable rates of cyanobacterial H2 production. Genetic alteration of amino acids in the gas-substrate channel of hydrogenases changes their intramolecular gas transport kinetics (121). Substituting methionine for two amino acids at the end of the channel (a valine and a leucine) makes the [NiFe] hydrogenase O2 tolerant, as shown for the enzyme from Desulfovibrio fructosovorans (50). Similar genetic engineering of an [Fe-Fe] hydrogenase could be rewarding, since such an enzyme, heterologously transferred to cyanobacteria, could couple directly with ferredoxin and the photosynthetically generated reducing power while being insensitive to the photosynthetically produced O2. However, as pointed out previously (64), heterologous expression of any such genetically modified hydrogenase in a cyanobacterium also requires transcription of host-specific response regulators, and, as outlined above, transcription factors likely show a degree of specificity for cyanobacteria, as evidenced for the LexA- and AbrB-like proteins of the bidirectional hydrogenase (159) (see above).
A realistic chance of improving H2 production by using either nitrogenase or hydrogenase lies in optimizing the photosynthetic electron flow for the generation of reductants, as outlined by the late David Hall and coworkers (90) some years ago. The light energy conversion efficiencies for H2 production in suspension cultures are only ca. 1 to 2% and thus very low (136). However, these values refer to the radiant energy incident on the cells rather than the energy absorbed, which is difficult to determine. These efficiencies can hardly be improved in dense cyanobacterial suspension cultures with their self-shading effects. However, immobilization of cyanobacteria by adsorption on solid matrices or by entrapment in gels or polymers may increase the number of heterocysts in filamentous cyanobacteria. Indeed, immobilized cells were reported to show sustained high rates of H2 production (90, 134, 175) (Table 1). The light energy conversion efficiencies for H2 production may also be higher in immobilized cells than in suspension cultures. In addition, immobilization may enhance the functional lifetime of the cells and thus result in longer-lasting H2 production (90).
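As an illustration of how such light-to-H2 efficiencies are computed, the fraction of incident radiant energy stored as H2 can be estimated from the production rate and the combustion enthalpy of H2 (~286 kJ/mol). The irradiance and rate values below are hypothetical, chosen only to reproduce the ca. 1 to 2% range cited, and are not taken from the text.

```python
# Illustrative light-to-H2 energy conversion efficiency.
# The example numbers (0.2 W incident light on a culture containing
# 1 mg chlorophyll) are hypothetical assumptions, not measured values.

H2_COMBUSTION_ENTHALPY = 286e3      # J/mol (higher heating value of H2)

def light_to_h2_efficiency(h2_umol_per_h, incident_watts):
    """Fraction of incident radiant energy stored as H2."""
    h2_mol_per_s = h2_umol_per_h * 1e-6 / 3600
    return h2_mol_per_s * H2_COMBUSTION_ENTHALPY / incident_watts

# Hypothetical example: 40 umol H2/h (the maximal rate per mg chlorophyll
# discussed above) under 0.2 W of incident light.
eff = light_to_h2_efficiency(40, 0.2)
print(f"{eff:.1%}")                 # ~1.6%, i.e., within the cited 1 to 2% range
```

The calculation makes clear why efficiencies referenced to incident rather than absorbed light stay low in dense, self-shading cultures: the denominator counts all photons reaching the vessel, absorbed or not.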
Sulfur deprivation leads to inactivation of photosystem II activity, resulting in anaerobiosis in the cultures and subsequently enhanced H2 production, as shown first for the green alga Chlamydomonas reinhardtii (145) and subsequently for cyanobacteria (4, 241). Cyanobacterial H2 production may also be augmented by altering the PSII/PSI ratio and by reducing the content of phycobilisome antennae in the cells (16). Since both cyanobacterial hydrogenases are Ni enzymes, H2 production could also be altered by the supply of Ni to the cells (2, 10, 164, 169). Ni limitation could prevent synthesis of the uptake hydrogenase, resulting in higher net H2 production from nitrogenase, whereas excess Ni could favor synthesis of, and H2 production by, the bidirectional hydrogenase. In addition, culture conditions can be optimized for maximal cyanobacterial H2 production (40, 41).
Activity may also be increased by artificially enhancing the number of heterocysts within filaments, and thus the nitrogenase concentration, e.g., by the use of chemicals such as 7-azatryptophan (30) or by site-directed mutagenesis (123, 144). An estimated 600 to 1,000 genes are specifically expressed in recently differentiated heterocysts (42, 133). The master gene controlling heterocyst differentiation is hetR, and heterocyst suppression is regulated by the patS and hetN gene products (38, 42, 236). Overexpression of the hetR gene enhances heterocyst frequency up to 29% in Anabaena (Nostoc) PCC 7120, but the remaining vegetative cells cannot perform CO2 fixation fast enough to meet the demand of the filaments for organic carbon and reductants (38). Research along these lines over the next several years will reveal whether cyanobacteria can ever be exploited for the realistic generation of renewable energy.
We are indebted to Gudrun Boison (Mariefred, Sweden) for helpful discussions and to Stefanie Junkermann (University of Cologne) for expert technical assistance with some of the experiments.
Almon, H., and P. Böger. 1988. Hydrogen metabolism of the unicellular cyanobacterium Chroococcidiopsis thermalis ATCC 29380. FEMS Microbiol. Lett. 49:445-449.
Almon, H., and P. Böger. 1984. Nickel-dependent uptake-hydrogenase activity in the blue-green alga Anabaena variabilis. Z. Naturforsch. 39:90-94.
Ananyev, G., D. Carrieri, and G. C. Dismukes. 2008. Optimization of metabolic capacity and flux through environmental cues to maximize hydrogen production by the cyanobacterium Arthrospira (Spirulina) maxima. Appl. Environ. Microbiol. 74:6102-6113.
Antal, T. K., and P. Lindblad. 2005. Production of H2 by sulphur-deprived cells of the unicellular cyanobacteria Gloeocapsa alpicola and Synechocystis sp. PCC 6803 during dark incubation with methane or at various extracellular pH. J. Appl. Microbiol. 98:114-120.
Antal, T. K., P. Oliveira, and P. Lindblad. 2006. The bidirectional hydrogenase in the cyanobacterium Synechocystis sp. strain PCC 6803. Int. J. Hydrogen Energy 31:114-120.
Appel, J., S. Phunpruch, K. Steinmüller, and R. Schulz. 2000. The bidirectional hydrogenase of Synechocystis PCC 6803 works as an electron valve during photosynthesis. Arch. Microbiol. 173:333-338.
Appel, J., and R. Schulz. 1998. Hydrogen metabolism in organisms with oxygenic photosynthesis: hydrogenase as important regulatory devices for a proper redox poising? J. Photochem. Photobiol. Biol. 47:1-11.
Asada, Y., Y. Koike, J. Schnackenberg, M. Miyake, I. Uemura, and J. Miyake. 2000. Heterologous expression of clostridial hydrogenase in the cyanobacterium Synechococcus PCC7942. Biochim. Biophys. Acta 1490:269-278.
Awai, K., and C. P. Wolk. 2007. Identification of the glycosyl transferase required for synthesis of the principal glycolipid characteristic of heterocysts of Anabaena sp. strain PCC 7120. FEMS Microbiol. Lett. 266:98-102.
Axelsson, R., and P. Lindblad. 2002. Transcriptional regulation of Nostoc hydrogenases: effects of oxygen, hydrogen and nickel. Appl. Environ. Microbiol. 68:444-447.
Battchikova, N., and E. M. Aro. 2007. Cyanobacterial NDH-1 complexes: multiplicity in function and subunit composition. Physiol. Plant. 131:22-32.
Bauer, C. C., L. Scappino, and R. Haselkorn. 1993. Growth of the cyanobacterium Anabaena on molecular nitrogen: NifJ is required when iron is limited. Proc. Natl. Acad. Sci. U. S. A. 90:8812-8816.
Belkin, S., and E. Padan. 1978. Hydrogen metabolism in the facultative anoxygenic cyanobacteria (blue-green algae) Oscillatoria limnetica and Aphanothece halophytica. Arch. Microbiol. 116:109-111.
Bergman, B., J. R. Gallon, A. N. Rai, and L. J. Stal. 1997. N2 fixation by non-heterocystous cyanobacteria. FEMS Microbiol. Rev. 19:139-185.
Berman-Frank, I., P. Lundgren, and P. Falkowski. 2003. Nitrogen fixation and photosynthetic oxygen evolution in cyanobacteria. Res. Microbiol. 154:157-164.
Bernat, G., N. Waschewski, and M. Rögner. 2009. Towards efficient hydrogen production: the impact of antenna size and external factors on electron transport dynamics in Synechocystis PCC 6803. Photosynthesis Res. 99:205-216.
Betancourt, D. A., T. M. Loveless, J. Brown, and P. E. Bishop. 2008. Characterization of diazotrophs containing Mo-independent nitrogenases, isolated from diverse natural environments. Appl. Environ. Microbiol. 74:3471-3480.
Bishop, P., and R. D. Joerger. 1990. Genetics and molecular biology of alternative nitrogen fixing systems. Annu. Rev. Plant Physiol. Plant Mol. Biol. 41:109-125.
Bishop, P., and R. Premakumar. 1992. Alternative nitrogen fixing systems, p. 736-762. In G. Stacey, R. H. Burris, and H. J. Evans (ed.), Biological nitrogen fixation. Chapman and Hall, New York, NY.
Böck, A., P. W. King, M. Blokesch, and M. C. Posewitz. 2006. Maturation of hydrogenases. Adv. Microb. Physiol. 51:1-71.
Böhme, H. 1998. Regulation of nitrogen fixation in heterocyst-forming cyanobacteria. Trends Plant Sci. 3:346-351.
Boison, G., H. Bothe, A. Hansel, and P. Lindblad. 1998. Evidence against a common use of the diaphorase subunits by the bidirectional hydrogenase and by respiratory complex I in cyanobacteria. FEMS Microbiol. Lett. 37:281-288.
Boison, G., H. Bothe, and O. Schmitz. 2000. Transcriptional analysis of hydrogenase genes in the cyanobacteria Anacystis nidulans and Anabaena variabilis monitored by RT-PCR. Curr. Microbiol. 40:315-321.
Boison, G., A. Mergel, H. Jolkver, and H. Bothe. 2004. Bacterial life and dinitrogen fixation at a gypsum rock. Appl. Environ. Microbiol. 70:7070-7077.
Boison, G., O. Schmitz, B. Schmitz, and H. Bothe. 1998. Unusual gene arrangement of the bidirectional hydrogenase and functional analysis of its diaphorase subunit HoxU in respiration of the unicellular cyanobacterium Anacystis nidulans. Curr. Microbiol. 36:253-258.
Boison, G., C. Steingen, L. J. Stal, and H. Bothe. 2006. The rice field cyanobacteria Anabaena azotica and Anabaena sp. CH1 express a vanadium-dependent nitrogenase. Arch. Microbiol. 186:367-376.
Borodin, V. B., A. Tsygankov, K. K. Rao, and D. O. Hall. 2000. Hydrogen production by Anabaena variabilis PK84 under simulated outdoor conditions. Biotechnol. Bioeng. 69:479-485.
Bothe, H. 1969. Ferredoxin als Kofaktor der cyclischen Photophosphorylierung in einem zellfreien System aus der Blaualge Anacystis nidulans. Z. Naturforsch. 24b:1574-1582.
Bothe, H., E. Distler, and G. Eisbrenner. 1978. Hydrogen metabolism in blue-green algae. Biochimie 60:277-289.
Bothe, H., and G. Eisbrenner. 1977. Effect of 7-azatryptophan on nitrogen fixation and heterocyst formation in the blue-green alga Anabaena cylindrica. Biochem. Physiol. Pflanz. 133:323-332.
Bothe, H., and G. Eisbrenner. 1981. The hydrogenase-nitrogenase relationship in nitrogen-fixing organisms, p. 141-150. In H. Bothe and A. Trebst (ed.), Biology of inorganic nitrogen and sulfur. Springer, Berlin, Germany.
Bothe, H., P. Hemmerich, and H. Sund. 1971. Some properties of phytoflavin isolated from the blue-green alga Anacystis nidulans, p. 211-237. In H. Kamin (ed.), Flavins and flavoproteins. University Park Press-Butterworth, Baltimore, MD.
Bothe, H., and G. Neuer. 1988. Electron donation to nitrogenase in heterocysts. Methods Enzymol. 167:496-501.
Bothe, H., S. Winkelmann, and G. Boison. 2008. Maximizing hydrogen production by cyanobacteria. Z. Naturforsch. 63c:226-232.
Brito, B., C. Baginsky, J. M. Palacios, E. Cabrera, T. Ruiz-Argüeso, and J. Imperial. 2005. Biodiversity of uptake hydrogenase systems from legume endosymbiotic bacteria. Biochem. Soc. Trans. 33:33-35.
Brito, B., A. Toffanin, R. I. Prieto, J. Imperial, T. Ruiz-Argüeso, and R. M. Palacios. 2008. Host-dependent expression of Rhizobium leguminosarum bv. viciae hydrogenase is controlled at the transcriptional and posttranslational levels in legume nodules. Mol. Plant Microbe Interact. 21:597-604.
Brusca, J. S., C. D. Carrasco, and J. W. Golden. 1989. Excision of an 11-kilobase-pair DNA element from within the nifD gene in Anabaena variabilis heterocysts. J. Bacteriol. 171:4138-4145.
Buikema, W. J., and R. Haselkorn. 2001. Expression of the Anabaena hetR gene from a copper-regulated promoter leads to heterocyst differentiation under repressing conditions. Proc. Natl. Acad. Sci. U. S. A. 98:2729-2734.
Burns, R. C., and R. W. F. Hardy. 1975. Nitrogen fixation in bacteria and higher plants. Mol. Biol. Biochem. Biophys. 21:1-189.
Burrows, E. H., F. W. R. Chaplen, and R. L. Ely. 2008. Optimization of media nutrient composition for increased photofermentative hydrogen production by Synechocystis sp PCC 6803. Int. J. Hydrogen Energy 33:6092-6099.
Burrows, E. H., W. K. Wong, X. Fern, F. W. R. Chaplen, and R. L. Ely. 2009. Optimization of pH and nitrogen for enhanced hydrogen production by Synechocystis sp PCC 6803 via statistical and machine learning methods. Biotechnol. Prog. 25:1009-1017.
Callahan, S. M., and W. J. Buikema. 2001. The role of HetN in maintenance of the heterocyst pattern in Anabaena sp PCC 7120. Mol. Microbiol. 40:941-950.
Carrasco, C. D., S. D. Holliday, A. Hansel, P. Lindblad, and J. W. Golden. 2005. Heterocyst-specific excision of the Anabaena strain PCC 7120 hupL element requires xisC. J. Bacteriol. 187:6031-6038.
Chakraborty, B., and K. R. Samaddar. 1995. Evidence for the occurrence of an alternative nitrogenase system in Azospirillum brasilense. FEMS Microbiol. Lett. 127:127-131.
Chatterjee, R., R. M. Allen, P. W. Ludden, and V. K. Shah. 1996. Purification and characterization of the vnf-encoded aponitrogenase from Azotobacter vinelandii. J. Biol. Chem. 271:6819-6826.
Chatterjee, R., P. W. Ludden, and V. K. Shah. 1997. Characterization of VNFG, the delta subunit of the vnf-encoded apodinitrogenase from Azotobacter vinelandii: implications for its role in the formation of functional dinitrogenase 2. J. Biol. Chem. 272:3758-3765.
Chien, Y.-T., V. Auerbruch, A. D. Brabban, and S. H. Zinder. 2000. Analysis of genes encoding an alternative nitrogenase in the archaeon Methanosarcina barkeri 227. J. Bacteriol. 182:3247-3253.
Cockell, C. S., A. C. Schuerger, D. Billi, E. I. Friedmann, and C. Panitz. 2005. Effects of a simulated martian UV flux on the cyanobacterium, Chroococcidiopsis sp. 029. Astrobiology 5:127-140.
Cooley, C., C. A. Howitt, and W. F. J. Vermaas. 2000. Succinate:quinol oxidoreductases in the cyanobacterium Synechocystis sp. PCC 6803: presence and function in metabolism and electron transport. J. Bacteriol. 182:714-722.
Cournac, L., A. L. De Lacey, A. Volbeda, C. Léger, B. Burlat, N. Martinez, S. Champ, L. Martin, O. Sanganas, M. Haumann, V. M. Fernandez, B. Guigliarelli, J. Fontecilla-Camps, and M. Rousset. 2009. Introduction of methionines in the gas channel makes [NiFe] hydrogenase aero-tolerant. J. Am. Chem. Soc. 131:10156-10164.
Cournac, L., G. Guedeney, G. Peltier, and P. M. Vignais. 2004. Sustained photoevolution of molecular hydrogen in a mutant of Synechocystis sp. strain PCC 6803 deficient in the type I NADH-dehydrogenase complex. J. Bacteriol. 186:1737-1746.
Curatti, L., E. Flores, and G. Salerno. 2002. Sucrose is involved in the diazotrophic metabolism of the heterocyst-forming cyanobacterium Anabaena sp. FEBS Lett. 513:175-178.
Deusch, O., G. Landan, M. Roettger, N. Gruenheit, K. V. Kowallik, J. F. Allen, W. Martin, and T. Dagan. 2008. Genes of cyanobacterial origin in plant nuclear genomes point to a heterocyst forming plastid ancestor. Mol. Biol. Evol. 25:748-761.
Devine, E., M. Holmquist, K. Stensjö, and P. Lindblad. 2009. Diversity and transcription of proteases involved in the maturation of hydrogenases in Nostoc punctiforme ATCC 29133 and Nostoc sp strain PCC 7120. BMC Microbiol. 9:1-19.
Diez, B., B. Bergman, and R. El-Shehawy. 2009. Marine diazotrophic cyanobacteria: out of the blue. Plant Biotechnol. 25:221-225.
Dilworth, M. J., R. R. Eady, R. L. Robson, and R. W. Miller. 1987. Ethane formation from acetylene as a potential test for vanadium nitrogenase in vivo. Nature 327:167-168.
Dilworth, M. J., and R. R. Eady. 1991. Hydrazine is a byproduct of dinitrogen reduction by the vanadium-nitrogenase from Azotobacter chroococcum. Biochem. J. 277:465-468.
Dixon, R. O. D. 1972. Hydrogenase in legume root nodule bacteroids, occurrence and properties. Arch. Microbiol. 85:193-201.
Eady, R. R. 1996. Structure-function relationship of alternative nitrogenase. Chem. Rev. 96:3013-3030.
Eilmus, S., C. Rösch, and H. Bothe. 2007. Prokaryotic life in a potash-polluted marsh with emphasis on N-metabolizing microorganisms. Environ. Pollut. 146:478-491.
Einsle, O., F. A. Tezcan, A. Andrade, B. Schmid, M. Yoshida, J. B. Howard, and D. C. Rees. 2002. Nitrogenase MoFe-protein at 1.16 Ångstrom resolution: a central ligand in the FeMo-cofactor. Science 297:1696-1700.
Ehira, S., and M. Ohmori. 2006. NrrA directly regulates expression of hetR during heterocyst differentiation in the cyanobacterium Anabaena sp. strain PCC 7120. J. Bacteriol. 188:8520-8525.
Emerson, S. R., and S. S. Huested. 1991. Ocean anoxia and the concentration of molybdenum and vanadium in seawater. Mar. Chem. 34:177-196.
English, C. M., C. Eckert, K. Brown, M. Seibert, and P. W. King. 2009. Recombinant and in vitro expression systems for hydrogenases. new frontiers in basic and applied studies for biological and synthetic H2 production. Dalton Trans. 45:9970-9978.
Evans, H. J., A. R. Harker, H. Papen, S. A. Russell, F. J. Hanus, and M. Zuber. 1987. Physiology, biochemistry, and genetics of the uptake hydrogenase in rhizobia. Annu. Rev. Microbiol. 41:335-361.
Fay, P. 1992. Oxygen relations of nitrogen fixation in cyanobacteria. Microbiol. Rev. 56:340-373.
Ferreira, D., F. A. L. Pinto, P. Moradas-Ferreira, M. V. Mendes, and P. Tamagnini. 2009. Transcription profiles of hydrogenase related genes in the cyanobacterium Lyngbya majuscula CCAP 1446/4. BMC Microbiol. 9:67.
Ferreira, D., L. J. Stal, P. Moradas-Ferreira, M. V. Mendes, and P. Tamagnini. 2009. The relation between N2-fixation and H2-metabolism in the marine filamentous nonheterocystous cyanobacterium Lyngbya aestuarii CCY 9616. J. Phycol. 45:896-905.
Fewer, D., T. Friedl, and B. Büdel. 2002. Chroococcidiopsis and heterocysts-differentiating cyanobacteria are each other's closest living relatives. Mol. Phylogenet Evol. 23:82-90.
Fisher, K., D. J. Lowe, P. Tavares, A. S. Pereira, B. H. Huynh, D. Edmondson, and W. E. Newton. 2007. Conformations generated during turnover of the Azotobacter vinelandii MoFe protein and their relationship to physiological function. J. Inorg. Biochem. 101:1649-1656.
Floener, L., and H. Bothe. 1980. Nitrogen fixation in Rhopalodia gibba, a diatom containing blue-greenish inclusions symbiotically, p. 541-552. In W. Schwemmler and H. E. A. Schenk (ed.), Endocytobiology, endosymbiosis and cell biology. Walter de Gruyter & Co, Berlin, Germany.
Flores, E., A. Herrero, C. P. Wolk, and I. Maldener. 2006. Is the periplasm continuous in filamentous cyanobacteria? Trends Microbiol. 14:439-443.
Fontecilla-Camps, J., P. Amara, C. Cavazzo, Y. Nicolet, and A. Volbeda. 2009. Structure-function relationship of anaerobic gas-processing metalloenzymes. Nature 460:814-822.
Foster, R., and J. Zehr. 2006. Characterization of diatom-cyanobacteria symbioses on the basis of nifH, hetR and 16S rRNA sequences. Environ. Microbiol. 8:1913-1925.
Friedrich, B., and E. Schwartz. 1993. Molecular biology of hydrogen utilization in aerobic chemolithotrophs. Annu. Rev. Microbiol. 47:351-383.
Galagan, J. E., C. Nusbaum, A. Roy, et al. 2002. The genome of Methanosarcina acetivorans reveals extensive metabolic and physiological diversity. Genome Res. 12:532-542.
Gantar, M. S. 2008. Microalgae and cyanobacteria: food for thought. J. Phycol. 44:260-268.
Garlick, S., A. Oren, and E. Padan. 1977. Occurrence of facultative anoxygenic photosynthesis among filamentous and unicellular cyanobacteria. J. Bacteriol. 129:623-629.
Gebler, A., T. Burgdorf, A. L. De Lacey, O. Rüdiger, A. Martinez-Arias, O. Lenz, and B. Friedrich. 2007. Impact of alterations near the [NiFe] active site on the function of the H2 sensor from Ralstonia eutropha. FEBS J. 274:74-85.
Geitler, L. 1977. Zur Entwicklungsgeschichte der Epithemiaceae Epithemia, Rhopalodia und Denticula (Diatomophyceae) und ihre vermutlichen symbiontischen Sphäroidkörper. Plant Syst. Evol. 128:265-275.
Ghirardi, M. L., A. Dubini, J. P. Yu, and P. Maness. 2009. Photobiological hydrogen-producing systems. Chem. Soc. Rev. 38:52-61.
Ghirardi, M. L., M. C. Posewitz, P. C. Maness, A. Dubini, J. Yu, and M. Seibert. 2007. Hydrogenases and hydrogen photoproduction in oxygenic photosynthetic organisms. Annu. Rev. Plant Biol. 58:71-91.
Giddings, J. W., and L. A. Staehelin. 1978. Plasma membrane architecture of Anabaena cylindrica: occurrence of microplasmodesmata and changes associated with heterocyst development and the cell cycle. Eur. J. Cell Biol. 16:235-249.
Golden, J. W., and H. S. Yoon. 2003. Heterocyst development in Anabaena. Curr. Opin. Microbiol. 6:557-563.
Grimme, R. A., C. E. Lubner, D. A. Bryant, and J. H. Golbeck. 2008. Photosystem I/molecular wire/metal nanoparticle bioconjugates for the photocatalytic production of H2. J. Am. Chem. Soc. 130:6308-6309.
Gupta, M., and N. G. Carr. 1981. Enzyme activities related to cyanophycin metabolism in heterocysts and vegetative cells of Anabaena spp. J. Gen. Microbiol. 125:17-23.
Gutekunst, K., S. Phunpruch, C. Schwarz, S. Schuchardt, R. Schulz-Friedrich, and J. Appel. 2005. LexA regulates the bidirectional hydrogenase in the cyanobacterium Synechocystis sp. PCC 6803. Mol. Microbiol. 58:810-823.
Gutthann, F., M. Egert, A. Marques, and J. Appel. 2007. Inhibition of respiration and nitrate assimilation enhances photohydrogen evolution under low oxygen concentrations in Synechocystis sp. PCC 6803. Biochim. Biophys. Acta 1767:161-169.
Hagemann, M., R. Jeanjean, S. Fulda, M. Havaux, F. Joset, and N. Erdmann. 1999. Flavodoxin accumulation contributes to enhanced cyclic electron flow around photosystem I in salt stressed cells of Synechocystis sp strain PCC 6803. Physiol. Plant. 105:670-678.
Hall, D. O., S. A. Markov, Y. Watanabe, and K. K. Rao. 1995. The potential applications of cyanobacterial photosynthesis for clean technologies. Photosynth. Res. 46:159-167.
Happe, T., A. Hemschemeier, M. Winkler, and A. Kaminski. 2002. Hydrogenases in green algae. Do they save the algae's life and solve our energy problems? Trends Plant Sci. 7:246-250.
Happe, T., K. Schütz, and H. Böhme. 2000. Transcriptional and mutational analysis of the uptake hydrogenase of the filamentous cyanobacterium Anabaena variabilis. J. Bacteriol. 182:1624-1631.
Haselkorn, R. 1992. Developmentally regulated gene rearrangements in prokaryotes. Annu. Rev. Genet. 26:113-130.
Haselkorn, R. 2005. Heterocyst differentiation and nitrogen fixation in Anabaena, p. 65-68. In Y.-P. Wang, M. Lin, Z.-X. Tian, C. Elmerich, and W. E. Newton (ed.), Biological nitrogen fixation, sustainable agriculture and the environment, Proceedings of the 14th International Nitrogen Fixation Congress. Springer, Dordrecht, Netherlands.
Hemrika, W., R. Renirie, H. L. Dekker, P. Barnett, and R. Wever. 1997. From phosphatases to vanadium peroxidases. A similar architecture of the active site. Proc. Natl. Acad. Sci. U. S. A. 94:2145-2149.
Henson, B. J., L. E. Pennington, L. E. Watson, and S. R. Barnum. 2008. Excision of the nifD element in heterocystous cyanobacteria. Arch. Microbiol. 189:357-366.
Herrero, A., A. M. Muro-Pastor, and E. Flores. 2001. Nitrogen control in cyanobacteria. J. Bacteriol. 183:411-425.
Holmqvist, M., K. Stensjö, P. Oliveira, P. Lindberg, and P. Lindblad. 2009. Characterization of the hupLS promoter activity in Nostoc punctiforme ATCC 29133. BMC Microbiol. 9:54.
Houchins, J. P. 1984. The physiology and biochemistry of hydrogen metabolism in cyanobacteria. Biochim. Biophys. Acta 768:227-255.
Houchins, J. P., and R. H. Burris. 1981. Comparative characterization of two distinct hydrogenases from Anabaena sp. strain 7120. J. Bacteriol. 146:215-221.
Houchins, J. P., and R. H. Burris. 1981. Occurrence and localization of two distinct hydrogenases in the heterocystous cyanobacterium Anabaena sp. strain 7120. J. Bacteriol. 146:209-214.
Howitt, C. A., and W. F. J. Vermaas. 1997. Analysis of respiratory mutants of Synechococcus 6803, p. 36. Abstr. IX Symp. Photosynth. Prokaryotes. The Vienna Academy of Sciences, Vienna, Austria.
Ihara, M., H. Nakamoto, T. Kamachi, I. Okura, and M. Maeda. 2006b. Light-driven hydrogen production by a hybrid complex of a [NiFe]-hydrogenase and the cyanobacterial photosystem I. Photochem. Photobiol. 82:1677-1685.
Ihara, M., H. Nishihara, K.-S. Yoon, O. Lenz, B. Friedrich, H. Nakamoto, K. Kojima, D. Honma, T. Kamachi, and I. Okura. 2006. Light-driven hydrogen production by a hybrid complex of a [NiFe]-hydrogenase and the cyanobacterial photosystem I. Photochem. Photobiol. 82:676-682.
Ishii, A., and Y. Hihara. 2008. An AbrB-like transcriptional regulator, Sll0822, is essential for the activation of nitrogen-regulated genes in Synechocystis sp. PCC 6803. Plant Physiol. 148:660-670.
Jolkver, H. 2005. Untersuchung zur Funktion der bidirektionalen Hydrogenase als nitrogenaseschützendes Enzym im einzelligen Cyanobakterium Chroococcidiopsis. Thesis. The University of Cologne, Cologne, Germany.
Jones, K. M., W. J. Buikema, and R. Haselkorn. 2003. Heterocyst-specific expression of patB, a gene required for nitrogen fixation in Anabaena strain PCC 7120. J. Bacteriol. 185:2306-2314.
Kellers, P., H. Ogata, and W. Lubitz. 2008. Purification, crystallization and preliminary X-ray analysis of the membrane-bound [NiFe] hydrogenase from Allochromatium vinosum. Acta Crystallogr. F 64:719-722.
Kentemich, T., M. Bahnweg, F. Mayer, and H. Bothe. 1989. Localization of the reversible hydrogenase in cyanobacteria. Z. Naturforsch. 44c:384-391.
Kentemich, T., M. Casper, and H. Bothe. 1991. The reversible hydrogenase in Anacystis nidulans is a component of the cytoplasmic membrane. Naturwissenschaften 78:559-560.
Kentemich, T., G. Danneberg, B. Hundeshagen, and H. Bothe. 1988. Evidence for the occurrence of the alternative, vanadium-containing nitrogenase in the cyanobacterium Anabaena variabilis. FEMS Microbiol. Lett. 51:19-24.
Kentemich, T., G. Haverkamp, and H. Bothe. 1991. The expression of a third nitrogenase in the cyanobacterium Anabaena variabilis. Z. Naturforsch. 46c:217-222.
Kessler, P. S., J. McLarnan, and J. A. Leigh. 1997. Nitrogenase phylogeny and the molybdenum dependence of nitrogen fixation in Methanococcus maripaludis. J. Bacteriol. 179:541-543.
Khan, Z., P. Bhadouria, and P. Bisen. 2005. Nutritional and therapeutic potential of Spirulina. Curr. Pharm. Biotechnol. 6:373-379.
Khudyakov, I. Y., and J. W. Golden. 2004. Different functions of HetR, a master regulator of heterocyst differentiation in Anabaena sp. PCC 7120, can be separated by mutation. Proc. Natl. Acad. Sci. U. S. A. 101:16040-16045.
Kiss, E., P. B. Kos, and I. Vass. 2009. Transcriptional regulation of the bidirectional hydrogenase in the cyanobacterium Synechocystis 6803. J. Biotechnol. 142:31-37.
Kitajima, S., F. Hashihama, and S. Takeda. 2009. Latitudinal distribution of diazotrophs and their nitrogen fixation in the tropical and subtropical western North Pacific. Limnol. Oceanogr. 54:537-547.
Koene-Cottaar, F. H. M., and G. Schraa. 1998. Anaerobic reduction of ethene to ethane in an enrichment culture. FEMS Microbiol. Ecol. 25:251-256.
Laczkó, I. 1986. Appearance of a reversible hydrogenase activity in Anabaena cylindrica grown in high light. Physiol. Plant 67:634-637.
Leach, C. K., and N. G. Carr. 1971. Pyruvate:ferredoxin oxidoreductase and its activation by ATP in the blue-green alga Anabaena variabilis. Biochim. Biophys. Acta 245:165-174.
Leroux, F., S. Dementin, B. Burlat, L. Cournac, A. Volbeda, S. Champ, L. Martin, B. Guigliarelli, P. Bertrand, J. Fontecilla-Camps, M. Rousset, and C. Léger. 2008. Experimental approaches to kinetics of gas diffusion in hydrogenase. Proc. Natl. Acad. Sci. U. S. A. 105:11188-11193.
Li, J. H., S. Laurent, V. Konde, S. Bedu, and C. C. Zhang. 2003. An increase in the level of oxoglutarate promotes heterocyst development in the cyanobacterium Anabaena sp. strain 7120. Microbiol. 149:3257-3263.
Liang, C.-M., M. Ekman, and B. Bergman. 2004. Expression of cyanobacterial genes involved in heterocyst differentiation and dinitrogen fixation along a plant symbiosis development profile. Mol. Plant Microbe Interact. 17:436-443.
Lindal, M., and F. J. Florencio. 2003. Thioredoxin-linked processes in cyanobacteria are as numerous as in chloroplasts, but targets are different. Proc. Natl. Acad. Sci. U. S. A. 100:16107-16112.
Lindblad, P., K. Christensson, P. Lindberg, A. Federov, G. Pinto, and A. Tsygankov. 2002. Photoproduction of H2 by wildtype Anabaena PCC 7120 and a hydrogenase deficient mutant: from laboratory experiments to outdoor culture. Int. J. Hydrogen Energy 27:1271-1281.
Liu, J. G., V. E. Bukatin, and A. A. Tsygankov. 2006. Light energy conversion into H2 by Anabaena variabilis mutant PK84 dense cultures exposed to nitrogen limitations. Int. J. Hydrogen Energy 31:1591-1596.
Long, M., J. Liu, Z. Chen, B. Bleijlevens, W. Roseboom, and S. P. Albracht. 2007. Characterization of a HoxEFUYH type of [NiFe] hydrogenase from Allochromatium vinosum and some EPR and IR properties of the hydrogenase module. J. Biol. Inorg. Chem. 12:62-78.
Lopez-Gomollon, S., J. A. Hernandez, S. Pellicer, V. E. Angarica, M. L. Peleato, and M. F. Fillat. 2007. Cross-talk between iron and nitrogen regulatory networks in Anabaena (Nostoc) PCC 7120: identification of the overlapping genes in FurA and Ntc regulons. J. Mol. Biol. 374:267-281.
Loveless, T. M., and P. E. Bishop. 1999. Identification of genes unique to Mo-independent nitrogenase systems in diverse diazotrophs. Can. J. Microbiol. 45:312-317.
Loveless, T. M., J. R. Saah, and P. E. Bishop. 1999. Isolation of nitrogen-fixing bacteria containing molybdenum-independent nitrogenases from natural environments. Appl. Environ. Microbiol. 65:4223-4225.
Lubner, C. E., R. A. Grimme, D. A. Bryant, and J. H. Golbeck. 2010. Wiring photosystem I for direct solar hydrogen production. Biochemistry 49:404-414.
Ludwig, M., R. Schulz-Friedrich, and J. Appel. 2006. Occurrence of hydrogenases in cyanobacteria and anoxygenic photosynthetic bacteria: implications for the phylogenetic origin of cyanobacterial and algal hydrogenases. J. Mol. Evol. 63:758-768.
Lynn, M. E., J. A. Bantle, and J. D. Ownby. 1986. Estimation of gene expression in heterocysts of Anabaena variabilis by using DNA-RNA hybridization. J. Bacteriol. 167:940-946.
Madamwar, D., N. Garg, and V. Shah. 2000. Cyanobacterial hydrogen production. World J. Microbiol. Biotechnol. 16:757-767.
Maeda, T., G. Vardar, W. T. Self, and T. K. Wood. 2007. Inhibition of hydrogen uptake in Escherichia coli by expressing the hydrogenase from the cyanobacterium Synechocystis sp. PCC 6803. BMC Biotechnol. 7:25-37.
Markov, S. A., M. J. Bazin, and D. O. Hall. 1995b. The potential of using cyanobacteria in photobioreactors for hydrogen production. Adv. Biochem. Eng. Biotechnol. 52:61-86.
Markov, S. A., R. Lichtl, M. J. Bazin, and D. O. Hall. 1995. Hydrogen production and carbon dioxide uptake by immobilised Anabaena variabilis in a hollow fibre photobioreactor. Enzyme Microb. Biotechnol. 17:306-310.
Masepohl, B., K. Scholisch, K. Görlitz, C. Kiútski, and H. Böhme. 1997. The heterocyst-specific fdxH gene product of the cyanobacterium Anabaena sp. PCC 7120 is important but not essential for nitrogen fixation. Mol. Gen. Genet. 253:770-776.
Masukawa, H., M. Mochimaru, and H. Sakurai. 2002. Disruption of the uptake hydrogenase gene, but not of the bidirectional hydrogenase gene, leads to enhanced photobiological hydrogen production by the nitrogen-fixing cyanobacterium Anabaena sp. PCC 7120. Appl. Microbiol. Biotechnol. 58:618-624.
Masukawa, H., M. Mochimaru, and H. Sakurai. 2002. The hydrogenases and photobiological hydrogen production utilizing nitrogenase system in cyanobacteria. Int. J. Hydrogen Energy 27:1471-1474.
Masukawa, H., X. Zhang, E. Yamazaki, S. Iwata, K. Nakamura, M. Mochimaru, K. Inoue, and H. Sakurai. 2009. Survey of the distribution of different types of nitrogenases and hydrogenases in heterocyst-forming cyanobacteria. Mar. Biotechnol. 11:397-409.
Maynard, R. H., R. Premakur, and P. E. Bishop. 1994. Mo-independent nitrogenase 3 is advantageous for diazotrophic growth of Azotobacter vinelandii on solid medium containing molydenum. J. Bacteriol. 176:5583-5586.
Mazur-Marzec, H., and M. Plinski. 2009. Do toxic cyanobacteria pose a threat to the Baltic ecosystem? Oceanologia 51:293-313.
Meeks, J. C., and J. Elhai. 2002. Regulation of cellular differentiation in filamentous cyanobacteria in free-living and plant-associated symbiotic growth states. Microbiol. Mol. Biol. Rev. 66:94-121.
Melis, A., and T. Happe. 2001. Hydrogen production. Green algae as a source of energy. Plant Physiol. 127:740-748.
Meyer, J. 2007. [FeFe] hydrogenases and their evolution: a genomic perspective. Cell Mol. Life Sci. 64:1063-1084.
Mikheeva, L. E., O. Schmitz, S. V. Shestakov, and H. Bothe. 1995. Mutants of the cyanobacterium Anabaena variabilis altered in hydrogenase activities. Z. Naturforsch 50c:505-510.
Mulkidjanian, A. Y., E. V. Koonin, K. S. Makarova, S. L. Mekhedov, A. Sorokin, Y. I. Wolf, A. Dufresne., F. Partensky, H. Burd, D. Kaznadzey, R. Haselkorn, and M. Galperin. 2006. The cyanobacterial genome core and the origin of photosynthesis. Proc. Natl. Acad. Sci. U. S. A. 103:13126-13131.
Mullineaux, C. W., V. Mariscal, A. Nenninger, H. Khanum, A. Herrero, E. Flores, and D. G. Adams. 2008. Mechanisms of intercellular molecule exchange in heterocyst-forming cyanobaceria. EMBO J. 27:1299-1308.
Nakamura, Y., J. Takahashi, A. Sakurai, Y. Inaba, E. Suzuki, S. Nihei, S. Fujiwara, M. Tsuzuki, H. Myashita, H. Ikemoto, M. Kawachi, H. Sekiguchi, and N. Kurano. 2005. Some cyanobacteria synthesize semi-amylopectin type a-polyglucans instead of glycogen. Plant Cell Physiol. 46:539-546.
Neuer, G., and H. Bothe. 1982. The pyruvate:ferredoxin oxidoreductase in heterocyts of the cyanobacterium Anabaena cylindrica. Biochim. Biophys. Acta 716:358-365.
Neunuebel, M. R., and J. W. Golden. 2008. The Anabaena sp. strain 7120 gene all2874 encodes a diguanylate cyclase and is required for normal heterocyst development under high-light growth conditions. J. Bacteriol. 190:6829-6838.
Newton, W. E. 2007. Physiology, biochemistry and molecular biology of nitrogen fixation, p. 109-129. In H. Bothe, S. J. Ferguson, and W. E. Newton (ed.), Biology of the nitrogen cycle. Elsevier, Amsterdam, Netherlands.
Ni, C. V., A. F. Yakuninin, and I. N. Gogotov. 1990. Influence of molybdenum, vanadium, and tungsten on growth and nitrogenase synthesis of the free-living cyanobacterium Anabaena azollae. Microbiology 59:395-398.
Oda, Y., S. K. Samanta, F. E. Rey, L. Wu, X. Liu, T. Yan, J. Zhou, and C. S. Harwood. 2005. Functional genomic analysis of three nitrogenase isoenzymes in the photosynthetic bacterium Rhodopseudomonas palustris. J. Bacteriol. 187:7784-7794.
Ohki, K., and Y. Taniuchi. 2009. Detection of nitrogenase in individual cells of a natural population of Trichodesmium using immunocytochemical methods for fluorescent cells. J. Oceanogr. 65:427-432.
Oliveira, P., and P. Lindblad. 2008. An AbrB-like protein regulates the expression of the bidirectional hydrogenase in Synechocystis sp. strain 6803. J. Bacteriol. 190:1011-1019.
Oliveira, P., and P. Lindblad. 2005. LexA. A transcriptional regulator binding in the promoter region of the bidirectional hydrogenase in the cyanobacterium Synechocystis sp. strain PCC6803. FEMS Microbiol. Lett. 251:59-66.
Oliveira, P., and P. Lindblad. 2009. Transcriptional regulation of the cyanobacterial Hox-hydrogenase. Dalton Trans. 45:9990-9996.
Olmedo-Verd, E., E. Flores, A. Herrero, and A. M. Muro-Pastor. 2005. HetR-dependent and -independent expression of heterocyst-related genes in an Anabaena strain overproducing the NtcA transcription factor. J. Bacteriol. 187:1985-1991.
Olmedo-Verd, E., A. Valladares, E. Flores, A. Herrero, and A. M. Muro-Pastor. 2008. Role of two Ntc-binding sites in the complex ntcA gene promoter of the heterocyst-forming cyanobacterium Anabaena sp. strain 7120. J. Bacteriol. 190:7584-7590.
Ow, S. Y., T. Cardona, A. Taton, A. Magnuson, P. Lindblad, K. Stensjo, and P. C. Wright. 2008. Quantitative shotgun proteomics of enriched heterocysts from Nostoc sp. PCC 7120 using 8-plex isobaric peptide tags. J. Proteome Res. 7:1615-1626.
Ow, S. Y., J. Noirel, T. Cardona, A. Taton, P. Lindblad, K. Stensjö, and P. C. Wright. 2009. Quantitative overview of N2 fixation in Nostoc punctiforme ATCC 29133 through cellular enrichments and iTRAQ shotgun proteomics. J. Proteome Res. 8:187-198.
Papen, H., T. Kentemich, T. Schmülling, and H. Bothe. 1986. Hydrogenase activities in cyanobacteria. Biochimie 68:121-132.
Papen, H., G. Neuer, M. Refaian, and H. Bothe. 1983. The isocitrate dehydrogenase from cyanobacteria. Arch. Microbiol. 134:73-79.
Papen, H., G. Neuer, A. Sauer, and H. Bothe. 1986. Properties of the glyceraldehyde-3-P dehydrogenase in heterocysts and vegetative cells of cyanobacteria. FEMS Microb. Lett. 36:201-206.
Pau, R. N. 1991. The alternative nitrogenases, p. 37-57. In M. J. Dilworth and A. R. Glenn (ed.), Biology and biochemistry of nitrogen fixation. Elsevier, Amsterdam, Netherlands.
Pau, R. N., W. Klipp, and S. Steinkühler. 1997. Molydenum transport, processing and gene regulation, p. 217-234. In G. Winkelmann and C. J. Carrano (ed.), Transition metals in microbial metabolism. Harwood, Newark, NJ.
Pederson, D. M., A. Daday, and G. D. Smith. 1986. The use of Ni to probe the role of hydrogen metabolism in cyanobacterial nitrogen-fixation. Biochimie 68:113-120.
Petterson-Fortin, L. M., and G. W. Owttrim. 2008. A Synechocystis LexA-orthologue binds direct repeats in target genes. FEBS Lett. 582:2424-2430.
Pohorelic, B. K. J., J. K. Voordouw, E. Lojou, A. Dolla, J. Harder, and G. Voordouw. 2002. Effects of deletion of genes encoding Fe-only hydrogenase of Desulfovibrio vulgaris Hildenborough on hydrogen and lactate metabolism. J. Bacteriol. 184:679-686.
Pratte, B., K. Eplin, and T. Thiel. 2006. Cross functionality of nitrogenase components NifH1 and VnfH in Anabaena variabilis. J. Bacteriol. 188:5806-5811.
Prechtl, J., C. Kneip, P. Lockhart, K. Wenderoth, and U.-G. Maier. 2004. Intracellular spheroid bodies of Rhopalodia gibba have nitrogen-fixing apparatus of cyanobacterial origin. Mol. Biol. Evol. 21:1477-1481.
Rakhely, G., A. Kovacs, G. Maroti, B. D. Fodor, G. Csanadi, D. Latinovics, and K. L. Kovacs. 2004. Cyanobacterial-type, pentameric, NAD+-reducing NiFe hydrogenase in the purple sulfur photosynthetic bacterium Thiocapsa roseopersicina. Appl. Environ. Microbiol. 70:722-728.
Rashid, N., W. Song, J. Park, H. F. Jin, and K. Lee. 2009. Characteristics of hydrogen production by immobilized cyanobacterium Microcystis aeruginosa through cycles of photosynthesis and anaerobic incubation. J. Ind. Eng. Chem. 15:498-503.
Raymond, J., J. L. Siefert, C. R. Staples, and R. H. Blankenship. 2004. The natural history of nitrogen fixation. Mol. Biol. Evol. 21:541-554.
Rees, D. C., F. A. Tezcan, C. A. Haynes, M. A. Walton, A. Andrade, O. Einsle, and J. N. Howard. 2005. Structural basis of biological nitrogen fixation. Philos. Trans. R. Soc. Lond. A 363:971-984.
Ribbe, M., D. Gadkari, and O. Meyer. 1997. N2 fixation by Streptomyces thermoautotrophicus involves a molybdenum-dinitrogenase and a manganese-superoxide oxidoreductase that couple N2 reduction in the oxidation of superoxide produced from O2 by a molybdenum CO-dehydrogenase. J. Biol. Chem. 272:26627-26633.
Rivera-Ortiz, J. M., and R. H. Burris. 1975. Interactions among substrates and inhibitors of nitrogenase. J. Bacteriol. 123:537-545.
Rösch, C., and H. Bothe. 2009. Diversity of total, nitrogen-fixing and denitrifying bacteria in an acid forest soil. Eur. J. Soil Sci. 60:883-894.
Ruvkun, G. B., and F. M. Ausubel. 1980. Interspecies homology of nitrogenase genes. Proc. Natl. Acad. Sci. U. S. A. 77:191-195.
Scherer, S., W. Kerfin, and P. Böger. 1980. Increase of nitrogenase activity in the blue-green alga Nostoc muscorum (cyanobacterium). J. Bacteriol. 144:1017-1023.
Schindelin, H., C. Kisker, J. Schlessman, J. B. Howard, and D. C. Rees. 1997. Structure of ADP. AlF4− stabilized nitrogenase complex and its implications for signal transduction. Nature 387:370-376.
Schink, B. 1982. Isolation of a hydrogenase-cytochrome b complex from cytoplasmic membarnes of Xanthobacter autotrophicus GZ 29. FEMS Microbiol. Lett. 13:289-293.
Schlegel, H. G., and U. Eberhardt. 1972. Regulatory phenomena in the metabolism of Knallgas bacteria. Adv. Microbiol. Physiol. 7:205-242.
Schmitz, O., G. Boison, and H. Bothe. 2001. Quantitative analysis of two circadian clock-controlled gene clusters coding for the birectional hydrogenase in the cyanobacterium Synechoccus sp. PCC7942. Mol. Microbiol. 41:1409-1417.
Schmitz, O., G. Boison, R. Hilscher, B. Hundeshangen, W. Zimmer, F. Lottspeich, and H. Bothe. 1995. Molecular biological analysis of a directional hydrogenase from cyanobacteria. Eur. J. Biochem. 233:266-276.
Schmitz, O., G. Boison, H. Salzmann, H. Bothe, K. Schütz., S. Wang, and T. Happe. 2002. HoxE—a subunit specific for the pentameric bidirectional hydrogenase complex (HoxEFUYH) of cyanobacteria. Biochim. Biophys. Acta 1554:66-74.
Schmitz, O., and H. Bothe. 1996. The diaphorase subunit HoxU of the bidirectional hydrogenase as electron transferring protein in cyanobacterial respiration? Naturwissenschaften 83:525-527.
Schmitz, O., and H. Bothe. 1996. NAD(P)+-dependent hydrogenase activity in extracts from the cyanobacterium Anacystis nidulans. FEMS Microbiol. Lett. 135:97-101.
Schmitz, O., J. Gurke, and H. Bothe. 2001. Molecular evidence for the aerobic expression of nifJ, encoding pyruvate:ferredoxin oxidoreductase, in cyanobacteria. FEMS Microb. Lett. 195:97-102.
Schmitz, O., T. Kentemich, W. Zimmer, B. Hundeshagen, and H. Bothe. 1993. Identification of the nifJ gene coding for pyruvate:ferredoxin oxidoreductase in dinitrogen-fixing cyanobacteria. Arch. Microbiol. 160:62-67.
Schrautemeier, B., U. Neveling, and S. Schmitz. 1995. Distinct and differentially regulated Mo-dependent nitrogen-fixing systems evolved for heterocysts and vegetative cells of Anabaena variabilis ATCC 29413: characterization of the fdX1/2 gene regions as part of the nif1/2 gene clusters. Mol. Microbiol. 18:357-359.
Schütz, K., T. Happe, O. Troshina, P. Lindblad, E. Leitão, P. Oliveira, and P. Tamagnini. 2004. Cyanobacterial H2-production—a comparative analysis. Planta 218:350-359.
Seabra, R., A. Santos, S. Pereira, P. Monades-Ferreira, and P. Tamagnini. 2009. Immunolocalization of the uptake hydrogenase in the marine cyanobacterium Lyngbya majuscula CCAP 1446/4 and two Nostoc strains. FEMS Microbiol. Lett. 292:57-62.
Seefeldt, L. C., L. G. Dance, and D. R. Dean. 2004. Substrate interactions with nitrogenase: Fe versus Mo. Biochemistry 43:1401-1409.
Serebryakova, L. T., M. E. Sheremetiva, and P. Lindblad. 2000. H2-uptake and evolution in the unicellular cyanobacterium Chroococcidiopsis thermalis CALU 758. Plant Physiol. Biochem. 38:525-530.
Shah, G. R., R. Karunakaran, and G. N. Kumar. 2007. In vivo restriction endonuclease activity of the Anabaena PCC 7120 XisA protein in Escherichia coli. Res. Microbiol. 158:679-684.
Shestakov, S. V., and L. E. Mikheeva. 2006. Genetic control of hydrogen metabolism in cyanobacteria. Russian J. Genet. 42:1272-1284.
Shi, Y. M., W. X. Zhao, W. Zhang, Z. Ye, and J. D. Zhao. 2006. Regulation of intracellular free calcium concentration during heterocyst differentiation by HetR and NtcA in Anabaena sp PCC 7120. Proc. Natl. Acad. Sci. U. S. A. 103:11334-11339.
Shima, S., O. Pilak, S. Vogt, M. Schick, M. S. Stagni, W. Meyer-Klaucke, E. Warkentin, R. K. Thauer, and U. Ermler. 2008. The crystal structure of [Fe]-hydrogenase reveals the geometry of the active site. Science 321:572-575.
Short, S. M., and J. P. Zehr. 2005. Quantitative analysis of nifH genes and transcripts from aquatic environments. Methods Enzymol. 397:380-394.
Simpson, F. B., and R. H. Burris. 1984. A nitrogen pressure of 50 atmospheres does not prevent evolution of hydrogen by nitrogenase. Science 224:105-1097.
Simpson, F. B., R. J. Maier, and H. J. Evans. 1979. Hydrogen-stimulated CO2-fixation and coordinate induction of hydrogenase and ribulosebiphosphate carboxylase in a H2-uptake positive strain of Bradyrhizobium japonicum. Arch. Microbiol. 123:1-8.
Smith, L. A., S. Hill, and M. G. Yates. 1976. Inhibition by acetylene of conventional hydrogenase in nitogen-fixing bacteria. Nature 262:209-210.
Stal, L. J. 2009. Is the distribution of nitrogen-fixing cyanobacteria in the oceans related to temperature? Environ. Mirobiol. 11:1632-1645.
Stal, L. J., and R. Mozelaar. 1997. Fermentation in cyanobacteria. FEMS Microbiol. Rev. 21:179-211.
Stanier, R. Y., and G. Cohen-Bazire. 1977. Phototrophic prokaryotes: the cyanobacteria. Annu. Rev. Microbiol. 31:225-274.
Stewart, W. D. P., and M. Lex. 1970. Nitrogenase activity in the blue-green alga Plectonema boryanum. Archiv. Mikrob. 73:250-260.
Stripp, S. T., and T. Happe. 2009. How algae produce hydrogen—news from the photosynthetic hydrognase. Dalton Trans. 45:9960-9969.
Takeshi, H., K. Ataka, O. Pilak, S. Vogt, M. S. Stagni, W. Meyer-Klaucke, E. Warkentin, R. K. Thauer, S. Shima, and U. Ermler. 2009. The crystal structure of C176 mutated [Fe]-hydrogenase suggests an acyl-iron ligation in the active site iron complex. FEBS Lett. 583:585-590.
Tamagnini, P., R. Axelsson, P. Lindberg, F. Oxelfelt, R. Wünschiers, and P. Lindblad. 2002. Hydrogenases and hydrogen metabolism of cyanobacteria. Microbiol. Mol. Biol. Rev. 66:1-20.
Tamagnini, P., J.-L. Costa, L. Almeida, M.-J. Oliveira, R. Salema, and P. Lindblad. 2000. Diversity of cyanobacterial hydrogenases, a molecular approach. Curr. Microbiol. 40:356-361.
Tamagnini, P., E. Leitão, P. Oliveira, D. Ferreira, F. A. L. Pinto, D. J. Harris, T. Heidorn, and P. Lindblad. 2007. Cyanobacterial hydrogenases diversity, regulation and application. FEMS Microbiol. Rev. 31:692-720.
Tel-Or, E., L. W. Luijk, and L. Packer. 1977. An inducible hydrogenase in cyanobacteria enhances N2-fixation. FEBS Lett. 78:49-53.
Thiel, T. 1993. Characterization of genes for an alternative nitrogenase in the cyanobacterium Anabaena variabilis. J. Bacteriol. 175:6276-6286.
Thiel, T., E. M. Lyons, J. Erker, and A. Ernst. 1995. A second nitrogenase in vegetative cells of a heterocyst-forming cyanobacterium. Proc. Natl. Acad. Sci. U. S. A. 92:9358-9362.
Thomas, J.-C., B. Ughy, B. Lagoutte, and G. Ajlani. 2006. A second isoform of ferredoxin:NADP oxidoreductase generated by an in-frame initiation of translation. Proc. Natl. Acad. Sci. U. S. A. 103:18368-18373.
Thomas, J., J. C. Meeks, C. P. Wolk, P. W. Shaffer, S. M. Austin, and W.-S. Chien. 1977. Formation of glutamine from [13N]ammonia and [13N]dinitrogen, and [14C]glutamate by heterocysts isolated from Anabaena cylindrica. J. Bacteriol. 129:1545-1155.
Thorneley, R. N. F., and D. J. Lowe. 1985. Kinetics and mechanism of the nitrogenase enzyme system, p. 222-284. In T. Spiro (ed.), Molydenum enzymes. J. Wiley, New York, NY.
Trebst, A., and H. Bothe. 1966. Zur Rolle des Phytoflavins im photosynthetischen Elektronentransport. Ber. Dtsch. Bot. Ges. 79:44-47.
Tsygankov, A. 2007. Nitrogen fixing cyanobacteria: a review. Appl. Biochem. Microbiol. 43:250-259.
Valladares, A., E. Flores, and A. Herrero. 2008. Transcription activation by NtcA and 2-oxoglutarate of three genes involved in heterocyst differentiation in the cyanobacterium Anabaena sp. strain PCC 7120. J. Bacteriol. 190:126-6133.
Van der Oost, J., B. A. Builthuis, S. Feitz, K. Krab, and R. Kraayenhof. 1989. Fermentation metabolism of the unicellular cyanobacterium Cyanothece PCC 7822. Arch. Microbiol. 151:415.419.
Van Lin, B., and H. Bothe. 1972. Flavodoxin from Azotobacter vinelandii. Arch. Microbiol. 82:155-172.
Vignais, P. M., and B. Billoud. 2007. Occurrence, classification and biological function of hydrogenases: an overview. Chem. Rev. 107:4206-4272.
Vignais, P. M., B. Billoud, and J. Meyer. 2001. Classification and phylogeny of hydrogenases. FEMS Microbiol. Rev. 25:455-501.
Vignais, P. M., and A. Colbeau. 2004. Molecular biology of microbial hydrogenases. Curr. Issues Mol. Biol. 6:159-188.
Vogt, S., E. J. Lyon, S. Shima, and R. K. Thauer. 2008. The exchange activities of [Fe] hydrogenase (iron-sulfur-cluster free hydrogenase) from methanogenic archaea in comparison with the exchange activities of [FeFe] and [NiFe] hydrogenases. J. Biol. Inorg. Chem. 13:97-106.
Walsby, A. E. 2007. Cyanobacterial heterocysts. terminal pores proposed as sites of gas exchange. Trends Microbiol. 15:340-349.
Weyman, P. D., B. Pratte, and T. Thiel. 2008. Transcription of hupSL in Anabaena variabilis ATTC 29143 is regulated by NtcA and not by hydrogen. Appl. Environ. Microbiol. 74:2103-2110.
Willstätter, R., and A. Stoll. 1918. Untersuchungen über die Assimilation der Kohlensäure. Sieben Abhandlungen. Springer, Berlin, Germany.
Wünschiers, R., M. Batur, and P. Lindblad. 2003. Presence and expression of hydrogenase specific C-terminal endopeptidases in cyanobacteria. BMC Microbiol. 3:8-20.
Yang, T., N. Maeser, M. Laryukhin, H. I. Lee, D. R. Daen, L. C. Seefeldt, and B. M. Hoffman. 2005. The interstitial atom of the nitrogenase FeMo-cofactor: Endor and ESEEM evidence that it is not a nitrogen. J. Am. Chem. Soc. 127:12804-12805.
Yates, M. G. 1972. Electron-transport to nitrogenase in Azotobacter chroococcum—Azotobacter flavodoxin hydroquinone as an electron donor. FEBS Lett. 27:63-67.
Yoon, H. S., and J. W. Golden. 1998. Heterocyst pattern formation controlled by a diffusible peptide. Science 282:935-938.
Zehr, J. P., S. R. Bench, B. J. Carter, I. Hewson, F. Niazi, T. Shi, H. J. Tripp, and J. P. Affourtit. 2008. Globally distributed uncultivated oceanic N2-fxing cyanobacteria lack oxygenic photosystem II. Science 322:1110-1112.
Zehr, J. P., S. R. Bench, E. A. Mondragon, J. McCarren, and E. F. DeLong. 2007. Low genomic diversity in tropical oceanic N2-fixing cyanobacteria. Proc. Natl. Acad. Sci. U. S. A. 104:17807-17812.
Zhang, C. C., S. Laurent, S. Sakr, L. Peng, and S. Bedu. 2006. Heterocyst differentiation and pattern formation in cyanobacteria. a chorus of signals. Mol. Microb. 59:367-375.
Zhang, C. C., H. Pu, Q. S. Wang, S. Cheng, W. X. Zhao, Y. Zhang, and J. D. Zhao. 2007. PII is important in regulation of nitrogen metabolism but not required for heterocyst formation in the cyanobacterium Anabaena sp PCC 7120. J. Biol. Chem. 282:33641-33648.
Zhang, Z., N. D. Pendse, K. N. Phillips, J. B. Cotner, and A. Khodursky. 2008. Gene expression patterns of sulfur starvation in Synechocystis sp. PCC 6803. BMC Genomics 9:344-354.
Zhao, Y., S. M. Bian, H. N. Zhou, and J. F. Huang. 2006. Diversity of nitrogenase systems in diazotrophs. J. Integrative Plant Biol. 48:745-755.
Hermann Bothe received his Ph.D. from Göttingen University and his habilitation from Bochum University. He was Professor of botany and microbiology at the University of Cologne, Germany, from 1978 and is now retired. As a student of A. Trebst, Göttingen/Bochum, Germany, he started to work on photosynthetic electron transport before he switched to nitrogen fixation, both in cyanobacteria. He also studied aspects of nitrogen fixation by associative bacteria, denitrification, arbuscular mycorrhiza, and heavy metal and salt resistance in plants. He has almost 200 publications in refereed journals.
Oliver Schmitz studied biology in Cologne, Germany, with his main focus on botany, genetics, and biochemistry, and completed his diploma thesis on arbuscular mycorrhiza in 1991. In the course of his dissertation in the laboratory of Professor Bothe, he specialized in hydrogen metabolism in cyanobacteria and obtained his Ph.D. in 1995 by characterizing the bidirectional hydrogenase in unicellular and in N2-fixing cyanobacteria by means of protein purification and applying molecular biology, resulting in the first identification of cyanobacterial hydrogenase genes at that time. He worked as postdoctoral fellow in Susan Golden's group at Texas A&M University, performing research on photosynthesis and the circadian clock in cyanobacteria. In 2001, he joined Metanomics GmbH, a BASF Plant Science company specialized in applying metabolomics in the fields of plant biotechnology, pharmacology, diagnostics, and toxicology. Currently, he is a member of the management team and head of the Data Interpretation Health group at Metanomics.
M. Geoffrey Yates received his B.Sc. from the University College of North Wales, Bangor, United Kingdom, and his Ph.D. from the University of Nottingham, United Kingdom, and then was Research Associate at Unilever Research Colworth House, Bedford, United Kingdom, at the Biochemistry Department of Johns Hopkins University, Baltimore, MD, and then at the Department of Biochemistry of Oxford University. For almost 30 years, he was Principal Scientific Officer at the BBSRC Unit of Nitrogen Fixation, University of Sussex, United Kingdom. For the last 15 years, he was Visiting Research Fellow at the Department of Biochemistry and Molecular Biology, Federal University of Paraná, Curitiba, Brazil. In recent years he worked on nitrogen fixation and hydrogen uptake in Azotobacter chroococcum, Azospirillum brasilense, and Herbaspirillum seropedicae.
William E. Newton received his B.Sc. from the University of Nottingham and his Ph.D. from the University of London (both in the United Kingdom), and he then spent a postdoctoral year at Harvard before spending 15 years at the Charles F. Kettering Research Laboratory in Yellow Springs, OH, as a member of its nitrogen fixation group. He then became Research Leader for Plant Productivity at the Western Regional Research Center (USDA-ARS) in Berkeley, CA, where he was awarded the USDA Certificate of Merit. He also served as Adjunct Professor at UC-Davis. In 1990, he moved to Virginia Polytechnic Institute and State University (Virginia Tech) as Director of the Biotechnology Center and Professor of Biochemistry. He later served as head of both the Biochemistry Department and the Department of Anaerobic Microbiology. He was elected Fellow of the Royal Society of Chemistry in 1992 and Fellow of the American Association for the Advancement of Science in 1996.
Microbiology and Molecular Biology Reviews Nov 2010, 74 (4) 529-551; DOI: 10.1128/MMBR.00033-10
De Bruijn Sequence and Universal Cycle Constructions
Orientable sequences
An orientable sequence or orientable universal cycle is a UC of order $n$ for a set $\mathbf{S}$ that does not contain both a string and its reversal, i.e., if $\mathbf{S}$ contains $w$ then it does not contain its reversal $w^R$ [DMRW93,MW21]. Thus $\mathbf{S}$ necessarily excludes palindromes.
Orientable UCs do not exist for $n<5$, and 001101 is a longest orientable UC for $n=5$. Lempel's $D$-morphism can be applied to recursively construct orientable UCs with length at least 63% of the trivial upper bound $2^{n-1} - 2^{\lfloor (n-1)/2\rfloor}$. Tighter bounds are known as given in the table below [DMRW93].
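The defining conditions are easy to check by brute force for small cases. Below is a minimal Python sketch (a stand-in illustration, not the site's C implementation) that verifies the $n=5$ example 001101: every cyclic window of length $n$ must occur at most once, and never together with its reversal.

```python
# A brute-force check of the defining conditions, assuming the n = 5
# example 001101 from the text. A cyclic string is an orientable UC of
# order n when every cyclic length-n window occurs at most once and no
# window occurs together with its reversal (palindromic windows are
# therefore excluded automatically).

def cyclic_windows(seq, n):
    m = len(seq)
    return [''.join(seq[(i + j) % m] for j in range(n)) for i in range(m)]

def is_orientable_uc(seq, n):
    seen = set()
    for w in cyclic_windows(seq, n):
        # reject duplicates, reversal pairs, and palindromes
        if w in seen or w[::-1] in seen or w == w[::-1]:
            return False
        seen.add(w)
    return True

print(is_orientable_uc("001101", 5))  # True
print(is_orientable_uc("0011", 3))    # False: 001 and its reversal 100 both occur
```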
Recursive orientable UC construction
Let $\mathcal{O}_n$ denote an orientable UC of order $n$ and length $m$ with odd weight and the following property: it has exactly one substring $0^{n-4}$. Let $\mathcal{O}_{n+1}$ be $D^{-1}(\mathcal{O}_n)$, replacing the unique substring $1^{(n+1)-4}$ with $1^{(n+1)-3}$ when the weight of $D^{-1}(\mathcal{O}_n)$ is even. The resulting orientable UC has length $2m$ or $2m{+}1$, has odd weight, and has exactly one substring $0^{(n+1)-4}$.
Let $\mathcal{O}_6 = \underline{00}1010111$. Then applying $D^{-1}$ yields $\mathcal{O}_7 = \underline{000}110010111001101$ which has odd weight. Applying $D^{-1}$ again yields 000010001101000100111101110010111011 which has even weight so the unique substring 1111 is replaced with 11111 to yield $$\mathcal{O}_8 = \underline{0000}100011010001001111101110010111011.$$
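The $D^{-1}$ step in this example can be sketched in a few lines of Python. This is a simplified illustration under the construction's assumption of odd weight; it reproduces $\mathcal{O}_7$ from $\mathcal{O}_6$ above, but the even-weight fixup used for $\mathcal{O}_8$ (replacing the unique 1111 with 11111) is not included.

```python
# A sketch of the inverse D-morphism step, under the assumption stated in
# the construction: the input is a cyclic sequence of odd weight, so its
# running XOR (prefix sum mod 2) closes up only after two periods and the
# preimage has twice the length.

def d_inverse_cyclic(s):
    bits = [int(c) for c in s]
    assert sum(bits) % 2 == 1, "construction assumes odd weight"
    out, b = [], 0
    for bit in bits * 2:  # run the prefix XOR over two periods
        out.append(b)
        b ^= bit
    return ''.join(map(str, out))

O6 = "001010111"
print(d_inverse_cyclic(O6))  # 000110010111001101, i.e. O_7 from the example
```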
The implementation above uses an orientable sequence of order $n=8$ and length 80 as a base case for recursively computing larger orders; brute force found optimal-length orientable sequences for $n<8$. Below is a table of the lengths of the orientable sequences constructed, compared with the upper bound from [DMRW93] or from exhaustive search.
Aperiodic orientable sequences
The linear relatives of orientable UCs are known as aperiodic orientable sequences (or aperiodic 2-orientable window sequences in [BM93]). Trivially, they can be obtained from an orientable UC by copying the length $n{-}1$ prefix and appending it to the end, and a trivial upper bound on their maximal length is $2^{n-1} - 2^{\lfloor (n-1)/2\rfloor} + (n{-}1)$. Longer constructions, with length at least 2/3 of optimal, can also be obtained by applying the properties of the $D$-morphism [MW21].
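The trivial cyclic-to-linear conversion can be illustrated directly. The following Python sketch (assuming the $n=5$ example 001101 from above) appends the length $n{-}1$ prefix and then checks the linear windows:

```python
# Illustrates the trivial conversion described above, assuming the n = 5
# orientable UC 001101: append the length n-1 prefix, then confirm that
# the linear string's length-n substrings are distinct and reversal-free.

def aperiodic_from_uc(uc, n):
    return uc + uc[:n - 1]

def is_aperiodic_orientable(seq, n):
    seen = set()
    for i in range(len(seq) - n + 1):
        w = seq[i:i + n]
        if w in seen or w[::-1] in seen or w == w[::-1]:
            return False
        seen.add(w)
    return True

s = aperiodic_from_uc("001101", 5)
print(s)                              # 0011010011
print(is_aperiodic_orientable(s, 5))  # True
```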
Recursive aperiodic orientable sequence construction
Let $\mathcal{A}_n$ denote an aperiodic orientable sequence of order $n$ and length $m$ with the following property: it begins with $0^{n-1}$ and ends with $1^{n-1}$. Let $D^{-1}(\mathcal{A}_n) = \{U, \overline{U}\}$, where $U$ begins with 0. Construct $\mathcal{A}_{n+1}$ by joining $U$ with the reversal of $\overline{U}$, overlapping the alternating suffix of the former with the alternating prefix of the latter. When $n$ is even, the overlap has length $n$; when $n$ is odd, the overlap has length $n{-}1$. $\mathcal{A}_{n+1}$ retains the property of beginning with $0^n$ and ending with $1^n$.
Let $\mathcal{A}_3 = 0011$. $D^{-1}(0011) = \{00010, 11101\}$ and $\mathcal{A}_4 = 000\underline{10}111$, where the overlap is underlined. Continuing, $D^{-1}(00010111) = \{000011010, 111100101\}$ and $\mathcal{A}_5 = 00001\underline{1010}01111$.
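The joining step can be sketched as follows. This hypothetical Python helper (names are illustrative, not from the site's C program) computes the two linear preimages under the $D$-morphism, reverses the complementary one, and overlaps by the length given in the construction; it reproduces $\mathcal{A}_4$ and $\mathcal{A}_5$ from the worked example.

```python
# A sketch of the joining step, reproducing A_4 and A_5 from the worked
# example. d_inverse_linear computes the two linear preimages under the
# D-morphism; next_aperiodic reverses the complementary preimage and
# overlaps the alternating suffix/prefix (length n for even n, n-1 for
# odd n, as in the construction above).

def d_inverse_linear(s):
    out, b = [0], 0
    for bit in map(int, s):
        b ^= bit
        out.append(b)
    u = ''.join(map(str, out))
    comp = ''.join('1' if c == '0' else '0' for c in u)
    return u, comp

def next_aperiodic(a, n):
    u, comp = d_inverse_linear(a)
    tail = comp[::-1]
    k = n if n % 2 == 0 else n - 1  # overlap length
    assert u[-k:] == tail[:k], "alternating overlap must match"
    return u + tail[k:]

A4 = next_aperiodic("0011", 3)
print(A4)                     # 00010111
print(next_aperiodic(A4, 4))  # 00001101001111
```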
Below is a table of the length of aperiodic orientable sequences obtained from this construction compared to sequence lengths found in [BM93] and the trivial upper bound.
(Aperiodic) orientable sequences have applications in robotic position-location systems, allowing a robot to determine both its location and orientation.
» C program
[BM93] J. Burns and C. J. Mitchell. Position sensing coding schemes, Cryptography and Coding III (M.J. Ganley, ed.), Oxford University Press, Oxford, 1993, pp. 31–66.
[DMRW93] Z.-D. Dai, K. M. Martin, M. J. B. Robshaw, and P. R. Wild. Orientable sequences, Cryptography and Coding III (M.J. Ganley, ed.), Oxford University Press, Oxford, 1993, pp. 97–115.
[MW21] C. J. Mitchell and P. R. Wild. Constructing orientable sequences, arXiv:2108.03069v1, 2021.
Interested in generating other combinatorial objects? Visit http://combos.org
nLab > Latest Changes: foundation of mathematics
Sep 5th 2012 (edited Sep 5th 2012)
at foundation of mathematics I have tried to start an Idea-section.
Also, I am hereby moving a bunch of old discussion boxes from there to here:
[ begin forwarded discussion ]
Urs asks: Concerning the last parenthetical remark: I suppose in this manner one could imagine $(n+1)$-categories as a foundation for $n$-categories? What happens when we let $n \to \infty$?
Toby answers: That goes in the last, as yet unwritten, section.
Urs asks: Can you say what the problem is?
Toby answers: I'd say that it proved to be overkill; ETCS is simpler and no less conceptual. In ETCC (or whatever you call it), you can neatly define a group (for example) as a category with certain properties rather than as a set with certain structure. But then you still have to define a topological space (for example) as a set with certain structure (where a set is defined to be a discrete category, of course). I think that Lawvere himself still wants an ETCC, but everybody else seems to have decided to stick with ETCS.
Roger Witte asks: Surely in ETCC, you define complete Heyting algebras as particular kinds of category and then work with Frames and Locales (ie follow Paul Taylor's lead and apply Stone Duality). You should be able to get to Top by examining relationships between Loc and Set. I thought Top might be the comma category of the forgetful functor from Loc to Set op and the contravariant powerset functor. Thus a Topological space would consist of a triple S, L, f where S is a set, L is a locale and f is a function from the objects of the locale to the powerset of S. A continuous function from S, L, f to S', L', f' is a pair g, h where g is a function from the powerset of S' to the powerset of S and h is a frame homomorphism from L' to L and (I don't know how to draw the commutation square). However I think this has too many spaces since lattice structures other than the inclusion lattice can be used to define open sets.
Toby: It's straightforward to define a topological space as a set equipped with a subframe of its power set. So you can define it as a set $S$, a frame $F$, and a frame monomorphism $f\colon F \to P(S)$, or equivalently as a set $S$, a locale $L$, and an epimorphism $f\colon L \to Disc(S)$ of locales, where $Disc(S)$ is the discrete space on $S$ as a locale. (Your 'However, […]' sentence is because you didn't specify epimorphism/monomorphism.) This is a good perspective, but I don't think that it's any cleaner in ETCC than in ETCS.
Roger Witte says Thanks, Toby. I agree with your last sentence but my point is that this approach is equally clean and easy in both systems. The clean thing about ETCC is the uniformity of meta theory and model theory as category theory. The clean thing about ETCS is that we have just been studying sets for 150 years, so we have a good intuition for them.
I was responding to your point 'ETCC is less clean because you have to define some things (eg topological spaces) as sets with a structure'. But you can define and study the structure without referring to the sets and then 'bolt on' the sets (almost like an afterthought).
Mike Shulman: In particular cases, yes. I thought the point Toby was trying to make is that only some kinds of structure lend themselves to this naturally. Groups obviously do. Perhaps topological spaces were a poorly chosen example of something that doesn't, since as you point out they can naturally be defined via frames. But consider, for instance, a metric space. Or a graph. Or a uniform space. Or a semigroup. All of these structures can be easily defined in terms of sets, but I don't see a natural way to define them in terms of categories without going through discrete categories = sets.
Toby: Roger, I don't understand how you intend to bolt on sets at the end. If I define a topological space as a set $S$, a frame $F$, and a frame monomorphism from $F$ to the power frame of $S$, how do I remove the set from this to get something that I can bolt the set onto afterwards? With semigroups, I can see how, from a certain perspective, it's just as well to study the Lawvere theory of semigroups as a cartesian category, but I don't see what to do with topological spaces.
Roger Witte says If we want to found mathematics in ETCC we want to work on nice categories rather than nice objects. Nice objects in not nice categories are hard work (and probably 'evil' to some extent). Thus the answer to Toby is that to do topology in ETCC you do as much as possible in Locale theory (ie pointless topology) and then when you finally need to do stuff with points, you create Top as a comma-like construction (ie you never take away the points but you avoid introducing them as long as possible). Is it not true that the only reason you want to introduce points is so that you can test them for equality/inequality (as opposed to topological separation)?
Mike, I spent about two weeks trying to figure out how to get around Toby's objection 'topology' and now you chuck four more examples at me. My gut feeling is that the category of directed graphs is found by taking the skeleton of CAT, that metric locales are regular locales with some extra condition to ensure a finite basis, that Toby can mak
[ to be continued in next comment ]
[ continuation of forwarded discussion ]
Urs says: I like categorial. If we think we can improve on existing terminology we should feel free to introduce it here.
Toby responds: Once you use 'categorial' when discussing logic, it's hard to justify using 'categorical' in other contexts. I suppose that I could go through the whole wiki and change 'categorical' to 'categorial' ….
Urs says: with respect to optimal terminology I think that one problem is that the very term "category" is not optimal.
Toby responds: I think that it's too late to play Bourbaki with that. Or do you want to say 'semigroupoid'? (which occasionally appears in groupoid literature but most properly would not have identity morphisms). I got a lot of stares over dinner at Groupoidfest when I said that that might be a better term after all (for categories as algebraic objects).
Mike comments: I think it is probably too late to play Bourbaki with "categorical" as well; too many people are using it who don't care about logic. However, there is another option here: I like to call ETCS-like theories structural set theories rather than "categori(c)al foundations". As Lawvere and others have pointed out, ETCS is still a theory of sets; it merely differs from traditional set theories such as ZF in its lack of a global membership predicate and in what notions it takes as basic.
Toby: Can we at least say 'category-theoretic' instead of 'categorical'? It can be very disconcerting to read, for example on tensor product, 'More categorically, this can be constructed as the coequalizer of the two maps […]'. (If anything, this is less categorical than the explicit previous construction by modding out relations, since the previous can be formalised in a membership-based set theory in which it is defined up to set-theoretic identity, while the other, however formalised, is defined only up to unique isomorphism. Of course, that should not really be considered less categorical, but it's still hardly more categorical.) To be unambiguous, this should be 'More categorially, […]' or, if you don't like that word, 'More category-theoretically […]'. (Or do you think that 'More structurally […]' will work here?)
Mike: At some point, I think one just has to accept that some words have more than one meaning, and learn to deal with it. I appreciate that it can be disconcerting to a logician or philosopher to read "categorical" used to mean "category-theoretic," but it is of course just as confusing to a category-theorist when it is used to mean "having a unique model." Undoubtedly the logicians were there first, but the time to have that argument was back when Eilenberg and Mac Lane chose the word "category." And although I am of course biased, I think that "category-theoretic" is a more important notion in mathematics than "having a unique model." And "categorial" looks to me like a misspelling.
Toby: Erm, no, while the term 'category' may have some problems that one might have objected to in 1945, that was not the time to anticipate that people might later apply it to logic and then use the adjective 'categorical'. I understand why you don't like 'categorial', so what's wrong with 'category-theoretic' or 'structural'? (when discussing logic, that is).
Mike: I didn't mean to say that anyone should have been able to anticipate the conflict of "categorical" related to categories with its use in logic back in 1945. I just meant that by now, when "category" has a universally established meaning, I don't think it's reasonable to object to the use of "categorical" to mean "related to categories." I like "structural" when it applies, but to me it has a different meaning than "categorical"—this is the point I'm trying to make with SEAR, that structural set theory doesn't depend on category theory. Finally, "category-theoretically" has almost twice as many syllables and letters as "categorically," and sounds awkward.
I am sort of starting to soften towards "categorial," at least when talking to logicians or about logic. There's enough antagonism towards category theory in the logic/foundations community without adding to the perception that we're stealing established terminology. But I'm not yet convinced that it's worth going on a crusade to eliminate "categorical" referring to category theory from everyday mathematical speech.
Toby: I don't feel on a crusade, but I avoid using 'categorical'. I often use 'categorial', or I'll use 'category-theoretic' or 'structural' if that seems more appropriate. (In the case of the tensor product, for example, I think that using 'structural' gets across the idea perfectly well; if someone has already written 'categorical' on the wiki and it seems disconcerting, then I wouldn't change it to 'categorial' but might change it to 'category-theoretic'.)
[ end of forwarded discussion ]
Thanks, Urs. I have added to the foundation of mathematics entry (under references) a permalink back to this archived version of the old discussion. IMHO we should always keep at entries backlinks to the archived versions of removed discussions, unless they are extremely obsolete or nonsubstantial. Otherwise the discussion is lost in the $n$Forum.
I think entries should only point to relevant information, not to every discussion vaguely related. I don't think my questions to Toby from years ago in the above are relevant enough. I'd rather have them not linked from the entry. I'd rather find 5 minutes to add to the entry a nice and polished paragraph that contains the outcome of these discussions.
I do not know. Here is about 4 pages of discussion among several people, some of which is interesting and not absorbed in the entry. The purpose of references is not to point to material identical to the entries nor necessarily central to the entry, but to point to expansions and further directions and related items. I do not know, I had the impression that it was not obsolete.
I just think that none of those people over the years felt that any of this was relevant or polished enough to put it into an nLab entry as usual. It sure stayed there only because everybody forgot about those discussion boxes.
Which is not to say that we should not work on that entry. That entry deserves to be improved! But it improves by adding contentful focused paragraphs, not random and unfocused chitchat. That is better had here in a discussion forum.
But it is lost if not backlinked. I think that when I get into such a discussion it absorbs about half an afternoon, so it is often a pity to lose it, from my subjective perspective. Having something idle for one year is little in comparison to the timespan of idle logs of most unfinished mathematical thoughts I usually have (I understand it might look much longer for superquick, focused and superefficient people like you), including unfinished circles of $n$Lab entries. A backlink is about half a line, and if one has further discussion boxes in the same entry later it is likely that the person who removes that one will then archive it in the same $n$Forum spot, which will once be useful. Otherwise, in the long term, during the many years, the accumulated queries from one entry will be in many $n$Forum threads. (( I am just talking now the matter of the principle. ))
I think I'm with Zoran here. Such discussions can be of both scientific and historical interest (providing inner thoughts and motivations which are a stimulus to many), and an unobtrusive backlink wouldn't mar an otherwise fine (or polished) article.
The Lab-book analogy would seem to support the idea of keeping information which is useful to researchers like Zoran accessible.
I don't think that all discussions should be linked from the page. Often, a discussion reaches some definite conclusion, which can then be incorporated into the page in a way that a future reader can obtain exactly the same information as the discussion would have provided, more easily and with less effort. However, when a discussion is not about some mathematical point and/or doesn't reach a definite conclusion, then I think it may be valuable for a reader to be pointed to it. Probably these particular discussions are in the latter category.
I am thinking two things:
1. the nForum is to the nLab what the talk-pages are to Wikipedia. On Wikipedia you'd be irritated if the Literature-section pointed you to the page's talk-page! You want the literature section to point you to stable, relevant information.
2. The number of seconds it takes to choose one point of that old discussion and turn it into a short useful paragraph on the entry is comparable to the seconds it takes to link to that discussion here and then discuss that move. But the former seconds would be better invested! ;-)
Urs, with regard to point 1, my own thinking is that we should not be comparing the nLab to WP. This is not only with regard to NPOV/nPOV, but also in the idea of this being a public Lab book where we record our notes and sometimes rough thoughts (although if we are moved to work on polishing and making things look more like an encyclopedia, that can be fine as well).
Please reread the first standout box on the Home Page!
I agree with Mike that we need not indiscriminately link to every old discussion, especially those that seem like a beginner is floundering around, but more substantive discussions which did not reach a firm conclusion can be linked to in an unobtrusive way (with a flag which indicates tentativeness of the discussion, if need be). This is desirable in view of the avowed value such discussions have for researchers like Zoran.
What is the value of the discussion in this case? If it is valuable, why does nobody extract the punchline?
I think, instead, the discussion above is not valuable. I think it is confused and uninformed. Back when we had that discussion, we were all lacking the crucial insights needed to answer that question with which I started the discussion.
I already found it embarrassing to copy that old discussion over to here. I did this only to stick to the rules. I would just entirely delete that old discussion, if I felt it was socially acceptable.
These questions: "what is the right foundation for directed homotopy theory?", we have discussed in an informed and actually useful way elsewhere.
The value of the discussion in this case lies in the discussion itself. There is not, as far as I can see, a "punchline" which can be "extracted" from it.
Urs, as far as I can tell, there are several query boxes you removed. The first one, where you asked the question about n-categories, got a very brief reply from Toby. I have no issue with not linking to that.
The second was also started by you, but it seems to have to do with Lawvere's ETCC. There the discussion does not seem confused and uninformed (strong words!) – at least participants like Toby and Mike seem not to be uninformed. How important the discussion is, that's something we each have to decide individually, but it was probably written close to around the same time as a long discussion at the Cafe which involved Arnold Neumaier, which was of some significance I believe, and it might be useful as a way of jogging memories for people like Zoran.
The third was a discussion about 'categorial' vs. 'categorical', which I personally find tiresome and not very interesting. I personally wouldn't care if that wasn't linked back to.
But I think the real issue is this:
If it is valuable, why does nobody extract the punchline?
Well, because for one we're busy and maybe don't feel like dropping other work to extract punchlines! Until such time as we're ready to break all links to an earlier discussion which may yet contain food for thought for some, why do you so strenuously object to a little unobtrusive link??
I have to go now, and not sure how much time I want to argue about this…
The third was a discussion about 'categorial' vs. 'categorical', which I personally find tiresome and not very interesting.
Regardless of how tiresome it is, I think there is some use in keeping it around in case someone new comes along and wants to have the same discussion. That way they can read the old discussion first and hopefully feel as though everything they wanted to say was already brought up and doesn't need to be said again.
@Mike #15: sure, I can see that (and wouldn't object to a link!). And I mean no disrespect to those who find the topic to be of interest.
I'm inclined to say that a link is warranted iff somebody thinks that a link is warranted. If Urs doubted that anybody would find the link useful, then I don't blame him for leaving it out. If Zoran wants the link only as a matter of principle, then I wouldn't put it in. But if Zoran (or anybody else) finds the link useful, then it should stay.
I clearly stated in post 3 "unless they are extremely obsolete or nonsubstantial", in accordance with 17. Hence I never meant that every query which is archived should be linked, and I even think that there are some rare queries which do not even warrant nForum archiving. Here I thought that it is substantial because it has a discussion of EETC and the like, which I have no time to comprehend at the moment but would like to relate to when in this subject next time. I also do not feel particularly strongly about how to handle this or that particular entry.
Here I thought that it is substantial because it has a discussion of EETC and the like, which I have no time to comprehend at the moment but would like to relate to when in this subject next time.
So this meets the requirement in my
if Zoran (or anybody else) finds the link useful, then it should stay.
Schedule for: 19w5088 - Algebraic Techniques in Computational Complexity
Arriving in Banff, Alberta on Sunday, July 7 and departing Friday July 12, 2019
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner ↓
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))
07:00 - 08:45 Breakfast ↓
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
08:45 - 09:00 Introduction and Welcome by BIRS Staff ↓
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
09:00 - 09:50 Ryan O'Donnell: Explicit near-Ramanujan graphs of every degree ↓
For every constant d >= 3 and epsilon > 0, we give a deterministic poly(n)-time algorithm that outputs a d-regular graph on Θ(n) vertices that is epsilon-near-Ramanujan; i.e., its eigenvalues are bounded in magnitude by 2sqrt(d-1)+epsilon (excluding the single trivial eigenvalue of d).
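Not part of the abstract, but the eigenvalue condition is easy to check numerically on a tiny example: the complete graph K4 is 3-regular, and all of its nontrivial eigenvalues fit under the 2*sqrt(d-1) bound (a sketch using numpy; the choice of K4 is illustrative only).

```python
import numpy as np

# Adjacency matrix of K4, a 3-regular graph on 4 vertices.
A = np.ones((4, 4)) - np.eye(4)
d = 3
eigs = np.linalg.eigvalsh(A)      # ascending order: [-1, -1, -1, 3]
nontrivial = eigs[:-1]            # drop the single trivial eigenvalue d
bound = 2 * np.sqrt(d - 1)        # Ramanujan bound, ~2.828 for d = 3
print(np.max(np.abs(nontrivial)) <= bound)  # True
```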
09:55 - 10:45 Dana Moshkovitz: Nearly Optimal Pseudorandomness From Hardness ↓
Existing proofs that deduce BPP=P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against nondeterministic circuits, we convert any randomized algorithm that errs rarely into a deterministic algorithm with a similar running time (with pre-processing), and any general randomized algorithm into a deterministic algorithm whose runtime is slower by a nearly linear multiplicative factor. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+alpha)log s for an arbitrarily small constant alpha>0, under the assumption that there exists a function f in E that requires nondeterministic circuits of size at least 2^{(1-alpha')n}, where alpha = O(alpha'). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list recoverable codes.
10:45 - 11:15 Coffee Break (TCPL Foyer)
11:15 - 12:05 Eshan Chattopadhyay: Pseudorandomness from the Fourier Spectrum ↓
We describe new ways of constructing pseudorandom generators for Boolean functions that satisfy certain bounds on their Fourier spectrum. We discuss the possibility of using this approach to construct pseudorandom generators for complexity classes that have eluded researches for decades. Based on joint works with Pooya Hatami, Kaave Hosseini, Shachar Lovett and Avishay Tal.
12:05 - 12:10 Group Photo (TCPL Foyer)
12:10 - 13:30 Lunch ↓
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
13:30 - 14:20 William Hoza: Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas ↓
We give an explicit pseudorandom generator (PRG) for read-once $\mathbf{AC}^0$, i.e., constant-depth read-once formulas over the basis $\{\wedge, \vee, \neg\}$ with unbounded fan-in. The seed length of our PRG is $\widetilde{O}(\log(n/\varepsilon))$. Previously, PRGs with near-optimal seed length were known only for the depth-$2$ case (Gopalan et al. FOCS '12). For a constant depth $d > 2$, the best prior PRG is a recent construction by Forbes and Kelley with seed length $\widetilde{O}(\log^2 n + \log n \log(1/\varepsilon))$ for the more general model of constant-width read-once branching programs with arbitrary variable order (FOCS '18). Looking beyond read-once $\mathbf{AC}^0$, we also show that our PRG fools read-once $\mathbf{AC}^0[\oplus]$ with seed length $\widetilde{O}(t + \log(n/\varepsilon))$, where $t$ is the number of parity gates in the formula. Our construction follows Ajtai and Wigderson's approach of iterated pseudorandom restrictions (Advances in Computing Research '89). We assume by recursion that we already have a PRG for depth-$d$ $\mathbf{AC}^0$ formulas. To fool depth-$(d + 1)$ $\mathbf{AC}^0$ formulas, we use the given PRG, combined with a small-bias distribution and almost $k$-wise independence, to sample a pseudorandom restriction. The analysis of Forbes and Kelley shows that our restriction approximately preserves the expectation of the formula. The crux of our work is showing that after $\text{poly}(\log \log n)$ independent applications of our pseudorandom restriction, the formula simplifies in the sense that every gate other than the output has only $\text{polylog} n$ remaining children. Finally, as the last step, we use a recent PRG by Meka, Reingold, and Tal (STOC '19) to fool this simpler formula. Joint work with Dean Doron and Pooya Hatami.
19:30 - 20:15 Shachar Lovett: Hao Huang's proof of the Sensitivity Conjecture ↓
http://www.mathcs.emory.edu/~hhuan30/papers/sensitivity_1.pdf
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:50 Scott Aaronson: Quantum Lower Bounds via Laurent Polynomials ↓
Ever since Beals et al. made the connection in 1998, the polynomial method has been one of the central tools for understanding the limitations of quantum algorithms. Here I'll explain a new variant of this method, which uses Laurent polynomials (polynomials that can have negative exponents). Together with colleagues William Kretschmer, Robin Kothari, and Justin Thaler, we've been able to use the emerging "Laurent polynomial method" (1) to prove a tight lower bound for approximately counting a set S, given not only a membership oracle for S but also copies of the state |S> and the ability to reflect about the state, (2) to prove an oracle separation between the classes SBP and QMA, showing that in the black-box setting, "there are no succinct quantum proofs that a set is large," and (3) for other applications still in development. No quantum computing background is needed for this talk. Preprint at https://arxiv.org/abs/1904.08914
09:55 - 10:45 Shubhangi Saraf: Factors of sparse polynomials: structural results and some algorithms ↓
Are factors of sparse polynomials sparse? This is a really basic question and we are still quite far from understanding it in general. In this talk, I will discuss a recent result showing that this is in some sense true for multivariate polynomials when the polynomial has each variable appearing only with bounded degree. Our sparsity bound uses techniques from convex geometry, such as the theory of Newton polytopes and an approximate version of the classical Caratheodory's Theorem. Using our sparsity bound, we then show how to devise efficient deterministic factoring algorithms for sparse polynomials of bounded individual degree. The talk is based on joint work with Vishwas Bhargav and Ilya Volkovich.
11:15 - 12:05 Amir Shpilka: Sylvester-Gallai Type Theorems for Quadratic Polynomials ↓
We prove Sylvester-Gallai type theorems for quadratic polynomials. Specifically, we prove that if a finite collection Q, of irreducible polynomials of degree at most 2, satisfy that for every two polynomials Q1,Q2 ∈ Q there is a third polynomial Q3∈Q so that whenever Q1 and Q2 vanish then also Q3 vanishes, then the linear span of the polynomials in Q has dimension O(1). We also prove a colored version of the theorem: If three finite sets of quadratic polynomials satisfy that for every two polynomials from distinct sets there is a polynomial in the third set satisfying the same vanishing condition then all polynomials are contained in an O(1)-dimensional space. This answers affirmatively two conjectures of Gupta [Electronic Colloquium on Computational Complexity (ECCC), 21:130, 2014] that were raised in the context of solving certain depth-4 polynomial identities. To obtain our main theorems we prove a new result classifying the possible ways that a quadratic polynomial Q can vanish when two other quadratic polynomials vanish. Our proofs also require robust versions of a theorem of Edelstein and Kelly (that extends the Sylvester-Gallai theorem to colored sets).
12:05 - 13:30 Lunch (Vistas Dining Room)
13:30 - 14:20 Josh Alman: Efficient Construction of Rigid Matrices Using an NP Oracle ↓
If H is a matrix over a field F, then the rank-r rigidity of H, denoted R_{H}(r), is the minimum Hamming distance from H to a matrix of rank at most r over F. Giving explicit constructions of rigid matrices for a variety of parameter regimes is a central open challenge in complexity theory. In this work, building on Williams' seminal connection between circuit-analysis algorithms and lower bounds [Williams, J. ACM 2014], we give a construction of rigid matrices in P^NP. Letting q = p^r be a prime power, we show: - There is an absolute constant delta>0 such that, for all constants eps>0, there is a P^NP machine M such that, for infinitely many N's, M(1^N) outputs a matrix H_N in {0,1}^{N times N} with rigidity R_{H_N}(2^{(log N)^{1/4 - eps}}) >= delta N^2 over F_q. Using known connections between matrix rigidity and a number of different areas of complexity theory, we derive several consequences of our constructions, including: - There is a function f in TIME[2^{(log n)^{omega(1)}}]^NP such that f notin PH^cc. Previously, it was even open whether E^NP subset PH^cc. - For all eps>0, there is a P^NP machine M such that, for infinitely many N's, M(1^N) outputs an N times N matrix H_N in {0,1}^{N times N} whose linear transformation requires depth-2 F_q-linear circuits of size Omega(N 2^{(log N)^{1/4 - eps}}). The previous best lower bound for an explicit family of N \times N matrices was only Omega(N log^2 N / log log N), for super-concentrator graphs. Joint work with Lijie Chen to appear in FOCS 2019.
17:30 - 19:30 Dinner (Vistas Dining Room)
19:30 - 20:20 Boaz Barak: Instance based complexity: the promised land or a mirage? ↓
Worst-case complexity has been for decades the main paradigm of theoretical CS. The study of worst-case complexity has led to many beautiful upper and lower bounds, but this notion can be sometimes too rigid. In this informal talk I would like to explore whether it is possible to define a meaningful notion of *instance-based complexity*, assigning to any instance X of a computational problem a complexity measure c(X) that captures the running time needed to solve X. There are several obstacles to establishing such a measure, which we'll discuss, but I will offer some directions for progress. I've given a similar talk at the IAS ( https://video.ias.edu/csdm/2019/0415-BoazBarak ) but for the Banff audience I will keep it shorter, less formal, and also discuss some more recent thoughts. This talk will be best enjoyed with beer.
09:00 - 09:50 Tselil Schramm: Max Cut with Linear Programs: Sherali-Adams Strikes Back ↓
Conventional wisdom asserts that linear programs (LPs) perform poorly for max-cut and other constraint satisfaction problems, and that semidefinite programming and spectral methods are needed to give nontrivial bounds. In this talk, I will describe a recent result that stands in contrast to this wisdom: we show that surprisingly small LPs can give nontrivial upper bounds for constraint satisfaction problems. Even more surprisingly, the quality of our bounds depends on the spectral radius of the adjacency matrix of the associated graph. For example, in a random $\Delta$-regular $n$-vertex graph, the $\exp(c \frac{\log n}{\log \Delta})$-round Sherali--Adams LP certifies that the max cut has value at most $50.1\%$. In random graphs with $n^{1.01}$ edges, $O(1)$ rounds suffice; in random graphs with $n \cdot \log(n)$ edges, $n^{O(1/\log \log n)} = n^{o(1)}$ rounds suffice.
09:55 - 10:45 Susanna de Rezende: Lifting with Simple Gadgets and Applications to Circuit and Proof Complexity ↓
Lifting theorems in complexity theory are a method of transferring lower bounds in a weak computational model into lower bounds for a more powerful computational model, via function composition. There has been an explosion of lifting theorems in the last ten years, essentially reducing communication lower bounds to query complexity lower bounds. These theorems only hold for composition with very specific ``gadgets'' such as indexing and inner product. In this talk, we will present a generalization of the theorem lifting Nullstellensatz degree to monotone span program size by Pitassi and Robere (2018) so that it works for any gadget with high enough rank, in particular, for useful gadgets such as equality and greater-than. We will then explain how to apply our generalized theorem to solve two open problems: • We present the first result that demonstrates a separation in proof power for cutting planes with unbounded versus polynomially bounded coefficients. Specifically, we exhibit CNF formulas that can be refuted in quadratic length and constant line space in cutting planes with unbounded coefficients, but for which there are no refutations in subexponential length and subpolynomial line space if coefficients are restricted to be of polynomial magnitude. • We give the first explicit separation between monotone Boolean formulas and monotone real formulas. Namely, we give an explicit family of functions that can be computed with monotone real formulas of nearly linear size but require monotone Boolean formulas of exponential size. Previously only a non-explicit separation was known. This talk is based on joint work with Or Meir, Jakob Nordström, Toniann Pitassi, Robert Robere, and Marc Vinyals.
11:15 - 12:05 Sajin Koroth: Query-to-Communication lifting using low-discrepancy gadgets ↓
Lifting theorems are theorems that relate the query complexity of a function f : {0, 1}^n → {0, 1} to the communication complexity of the composed function f ◦ g^n, for some "gadget" g : {0, 1}^b × {0, 1}^b → {0, 1}. Such theorems allow transferring lower bounds from query complexity to communication complexity, and have seen numerous applications in recent years. In addition, such theorems can be viewed as a strong generalization of a direct-sum theorem for the gadget g. We prove a new lifting theorem that works for all gadgets g that have logarithmic length and exponentially-small discrepancy, for both deterministic and randomized communication complexity. Thus, we increase the range of gadgets for which such lifting theorems hold considerably. Our result has two main motivations: First, allowing a larger variety of gadgets may support more applications. In particular, our work is the first to prove a randomized lifting theorem for logarithmic-size gadgets, thus improving some applications of the theorem. Second, our result can be seen as a strong generalization of a direct-sum theorem for functions with low discrepancy. Joint work with Arkadev Chattopadhyay, Yuval Filmus, Or Meir, Toniann Pitassi.
13:30 - 17:30 Free Afternoon (Banff National Park)
09:00 - 09:50 Arkadev Chattopadhyay: The Log-Approximate-Rank Conjecture is False ↓
We construct a simple and total XOR function F on 2n variables that has only O(n) spectral norm, O(n^2) approximate rank and O(n^{2.5}) approximate nonnegative rank. We show it has polynomially large randomized bounded-error communication complexity of Omega(sqrt(n)). This yields the first exponential gap between the logarithm of the approximate rank and randomized communication complexity for total functions. Thus, F witnesses a refutation of the Log-Approximate-Rank Conjecture which was posed by Lee and Shraibman (2007) as a very natural analogue for randomized communication of the still unresolved Log-Rank Conjecture for deterministic communication. The best known previous gap for any total function between the two measures was a recent 4th-power separation by Göös, Jayram, Pitassi and Watson (2017). Remarkably, after our manuscript was published in the public domain, two groups of researchers, Anshu-Boddu-Touchette (2018) and Sinha-de-Wolf (2018), showed independently that the function F even refutes the Quantum-Log-Approximate-Rank Conjecture. (Joint work with Nikhil Mande and Suhail Sherif)
09:55 - 10:45 Shachar Lovett: The sunflower conjecture and connections to TCS ↓
The sunflower conjecture is one of the famous open problems in combinatorics. In attempting to improve the current known bounds, we discovered connections to objects studies in TCS, such as randomness extractors and DNFs, as well as to new questions in pseudo-randomness. I will describe some of these connections and the many open problems that arise. Based on joint works with Ryan Alweiss, Xin Li, Noam Solomon and Jiapeng Zhang.
11:15 - 12:05 Or Meir: Toward the KRW conjecture: On monotone and semi-monotone compositions ↓
Proving super-logarithmic lower bounds on the depth of circuits is one of the main frontiers of circuit complexity. In 1991, Karchmer, Raz and Wigderson observed that we could resolve this question by proving the following conjecture: Given two boolean functions, the depth complexity of their composition is about the sum of their individual depth complexities. While we cannot prove the conjecture yet, there has been some meaningful progress toward such a proof, some of it in the last few years. With the goal of making further progress, we study two interesting variants of the conjecture: First, we consider the analogue of the KRW conjecture for monotone circuits. In this setting, we are able to prove that the conjecture holds for a very broad range of cases: namely, our result holds for every outer function, and for every inner function whose hardness can be established by lifting. This includes in particular several interesting inner functions, such as s-t-connectivity, clique and the GEN function of Raz-McKenzie. For our second variant, we define a natural "semi-monotone" version of the KRW conjecture, which aims to bridge the monotone and the non-monotone settings. While we are not able to prove this version of the conjecture, we prove lower bounds on some closely related problems.
13:30 - 14:20 Mark Bun: Private hypothesis selection ↓
We investigate the problem of differentially private hypothesis selection: Given i.i.d. samples from an unknown probability distribution P and a set of m probability distributions H, the goal is to privately output a distribution from H whose total variation distance to P is comparable to that of the best such distribution. We present several algorithms for this problem which achieve sample complexity similar to those of the best non-private algorithms. These give new and improved learning algorithms for a number of natural distribution classes. Our results also separate the sample complexities of private mean estimation under product vs. non-product distributions.
19:30 - 20:20 Toniann Pitassi: Vignettes: Teaching in Africa, and other personal stories (TCPL 201)
09:00 - 09:50 Benjamin Rossman: Criticality and decision-tree size of regular AC^0 functions ↓
A boolean function F : {0,1}^n -> {0,1} is said to be "k-critical" if it satisfies Pr[ F|R_p has decision-tree depth >= t ] <= (pk)^t for all p and t, where R_p : {x_1,…,x_n} -> {0,1,*} is the p-random restriction. For example, Hastad's Switching Lemma (1986) states that every k-CNF formula is O(k)-critical. I will discuss an alternative switching lemma (using an entropy argument) which shows that size-s CNF formulas are O(log s)-critical. A recent extension of this argument establishes a tight bound of O((log s)/d)^d on the criticality of regular AC^0 formulas of depth d+1, where "regular" means that gates at the same height have equal fan-in. This strengthens several recent results for AC^0 circuits on their decision-tree size, Fourier spectrum, and the complexity of #SAT. (Paper to appear in CCC 2019.)
09:55 - 10:45 Neeraj Kayal: Reconstructing arithmetic formulas using lower bound proof techniques ↓
What is the smallest formula computing a given multivariate polynomial f(x)? In this talk I will present a paradigm for translating the known lower bound proofs for various subclasses of formulas into efficient proper learning algorithms for the same subclass. Many lower bound proofs for various subclasses of arithmetic formulas reduce the problem to showing that in any expression for f(x) as a sum of "simple" polynomials T_i(x): f(x) = T_1(x) + T_2(x) + … + T_s(x), the number s of simple summands is large. For example, each simple summand T_i could be a product of linear forms or a power of a low degree polynomial and so on. The lower bound consists of constructing a vector space of linear maps M, each L in M being a linear map from the set of polynomials F[x] to some vector space W (typically W is F[x] itself) with the following two properties: (i) For every simple polynomial T, dim(M*T) is small, say dim(M*T) <= r. (ii) For the candidate hard polynomial f, dim(M*f) is large, say dim(M*f) >= R. These two properties immediately imply a lower bound: s >= R/r. The corresponding reconstruction/proper learning problem is the following: given f(x) we want to find the simple summands T_1(x), T_2(x), …, T_s(x) which add up to f(x). We will see how such a lower bound proof can often be used to solve the reconstruction problem. Our main tool will be an efficient algorithmic solution to the problem of decomposing a pair of vector spaces (U, V) under the simultaneous action of a vector space of linear maps from U to V. Along the way we will also obtain very precise bounds on the size of formulas computing certain explicit polynomials. For example, we will obtain, for every s, an explicit polynomial f(x) that can be computed by a depth three formula of size s but not by any depth three formula of size (s-1). Based on joint works with Chandan Saha and Ankit Garg.
11:00 - 11:50 Joshua Grochow: Tensor Isomorphism: completeness, graph-theoretic methods, and consequences for Group Isomorphism ↓
We consider the problems of testing isomorphism of tensors, p-groups, cubic forms, algebras, and more, which arise from a variety of areas, including machine learning, group theory, and cryptography. Despite a perhaps seeming similarity with Graph Isomorphism, the current-best algorithms for these problems (when given by bases) are still exponential - for most of them, even q^{n^2} over GF(q). Similarly, while efficient practical software exists for Graph Isomorphism, for these problems even the best current software can only handle very small instances (e.g., 10 x 10 x 10 over GF(13)). This raises the question of finding new algorithmic techniques for these problems, and/or of proving hardness results. We show that all of these problems are equivalent under polynomial-time reductions, giving rise to a class of problems we call Tensor Isomorphism-complete (TI-complete). We further show that testing isomorphism of d-tensors for any fixed d (at least 3) is equivalent to testing isomorphism of 3-tensors. Using the same techniques, we show two first-of-their-kind results for Group Isomorphism (GpI): (a) a reduction from isomorphism of p-groups of exponent p and class c < p, to isomorphism of p-groups of exponent p and class 2, and (b) a search-to-decision reduction for the latter class of groups in time |G|^{O(log log|G|)}. We note that while p-groups of class 2 have long been believed to be the hardest cases of GpI, as far as we are aware this is the first reduction from any larger class to this class of groups. Finally, we discuss a way to apply combinatorial methods from Graph Isomorphism (namely, Weisfeiler-Leman) to Group and Tensor Isomorphism. Based on joint works with Vyacheslav V. Futorny and Vladimir V. Sergeichuk (Lin. Alg. Appl., 2019; arXiv:1810.09219), with Peter A. Brooksbank, Yinan Li, Youming Qiao, and James B. Wilson (arXiv:1905.02518), and with Youming Qiao (arXiv:190X.XXXXX).
11:30 - 12:00 Checkout by Noon ↓
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
12:05 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)
Similarity Operator
Aliases: Projection, Inner Product
A generalization of an operator that computes the similarity between a Model and a Feature.
How do we calculate the similarity between the model and input? Features found in practice may require different kinds of measures to determine similarity.
Similarity: $ \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R} $
<Diagram>
In its most generalized sense, similarity is a measure of equivalence between two objects. For vectors, it is described as the inner product. For distributions, it can be described as the KL divergence between two distributions. There are many kinds of similarity measures; this is documented in a survey [Cha 2007], in which Cha classifies similarity functions into eight different families.
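As a concrete illustration, both kinds of measure can be sketched in a few lines of NumPy (the function names here are illustrative, not from any particular library):

```python
import numpy as np

def cosine_similarity(u, v):
    """Inner-product similarity, normalized to [-1, 1]."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def kl_divergence(p, q):
    """KL divergence D(p || q) between two discrete distributions.
    Note: asymmetric, so strictly a divergence rather than a metric."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

u = np.array([1.0, 0.0]); v = np.array([1.0, 1.0])
print(round(cosine_similarity(u, v), 4))  # 0.7071, i.e. cos(45 degrees)
p = [0.5, 0.5]; q = [0.9, 0.1]
print(kl_divergence(p, p))                # 0.0 -- identical distributions
print(round(kl_divergence(p, q), 4))      # 0.5108 -- dissimilar distributions
```

Note the asymmetry of KL divergence: unlike the inner product, `kl_divergence(p, q)` and `kl_divergence(q, p)` generally differ, which is one reason Cha's taxonomy treats these families separately.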
Similarities are also tightly related to hashing functions. Hash algorithms can be classified into several families: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, and quantization.
In its most generalized sense, a neuron can be thought of as composed of a similarity function between input and parameters, with the resulting measure fed through an activation function. The conventional neuron computes an inner product between the input vector and an internal weight vector. This is equivalent to projecting the inputs onto a random matrix of weight vectors.
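A minimal sketch of this view of a neuron (the names and the choice of tanh activation are illustrative assumptions, not taken from any particular framework):

```python
import numpy as np

def neuron(x, w, b=0.0, activation=np.tanh):
    """A neuron as similarity-then-activation: the inner product <x, w>
    measures agreement between input and weights; the activation
    squashes that similarity score."""
    return activation(np.dot(x, w) + b)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = x / np.linalg.norm(x)            # weights perfectly aligned with the input
print(neuron(x, w) > neuron(x, -w))  # aligned weights score higher: True
```

The same input fed to anti-aligned weights produces the lowest similarity score, which is the intuition behind treating a layer of neurons as a bank of similarity detectors.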
The convolution can be considered as a generalization of a correlation operation. Convolution is equivalent to correlation when the kernel distribution is symmetric.
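This equivalence is easy to check numerically; `np.convolve` flips the kernel, so correlation is recovered by flipping it back first:

```python
import numpy as np

signal = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
sym  = np.array([1.0, 2.0, 1.0])   # symmetric kernel
asym = np.array([1.0, 2.0, 0.0])   # asymmetric kernel

def correlate(x, k):
    # Correlation slides the kernel without flipping it; since np.convolve
    # flips the kernel, flipping it first recovers plain correlation.
    return np.convolve(x, k[::-1], mode="valid")

print(np.allclose(np.convolve(signal, sym, mode="valid"),
                  correlate(signal, sym)))    # True: symmetric kernel
print(np.allclose(np.convolve(signal, asym, mode="valid"),
                  correlate(signal, asym)))   # False: asymmetric kernel
```

This is why deep learning libraries can implement "convolution" layers as correlations: since the kernels are learned, the flip is immaterial.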
Shannon's entropy is related to a similarity measure: up to an additive constant, it is the negative KL divergence between the observed distribution and the uniform distribution.
Fisher's Information Matrix (FIM) is a multi-dimensional generalization of the similarity measure. The metric resides in a non-Euclidean space.
Does the metric have to map to 1-dimensional space?
Does the metric have to be Euclidean?
What are the minimal characteristics for a metric?
Are neural embeddings favorable if they preserve a similarity measure?
Known Uses
Related Patterns
Pattern is related to the following Canonical Patterns:
Irreversibility and Merge form the essential mechanisms of any DL system.
Entropy is a global similarity measure that drives the evolution of the aggregate system. The local effect of a similarity operator on entropy appears to be neutral.
Distance Measure generalizes the many ways we can define similarity beyond the vector dot product.
Random Projections shows how a collection of similarity operators can lead to a mapping that is able to preserve distance.
Clustering is a generalization of how space can be partitioned and at its core requires a heuristic for determining similarity.
Geometry provides a framework for understanding information spaces.
Random Orthogonal Initialization is a beneficial initialization that leads to good projections and clustering.
Dissipative Adaptation, where energy absorption is equivalent to similarity matching.
Adversarial Features are a consequence of the use of a linear similarity measure.
Anti-causality expresses the direction of predictability that is a consequence of performing a similarity measure.
Pattern is cited in:
Canonical Patterns
Dissipative Adaptation
Information Geometry
Decision Operator
Merge Operator
Random Projections
Embedology
See Sung-Hyuk Cha, "Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions," International Journal of Mathematical Models and Methods in Applied Sciences, Volume 1, Issue 4, 2007, pp. 300-307 for a survey. The author identifies 45 PDF distance functions and classifies them into eight families: Lp Minkowski, L1, intersection, inner product, fidelity (squared chord), squared L2 (χ2), Shannon's entropy, and combinations.
http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.154.8446 http://elki.dbs.ifi.lmu.de/wiki/DistanceFunctions http://tech.knime.org/wiki/distance-measure-developers-guide
http://turing.cs.washington.edu/papers/uai11-poon.pdf Sum-Product Networks: A New Deep Architecture
http://arxiv.org/pdf/1606.00185v1.pdf
A Survey on Learning to Hash
Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, and categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations.
http://psl.umiacs.umd.edu/files/broecheler-uai10.pdf Probabilistic Similarity Logic
http://arxiv.org/pdf/1606.06086v1.pdf Uncertainty in Neural Network Word Embedding Exploration of Threshold for Similarity
http://arxiv.org/abs/1306.6709v4 A Survey on Metric Learning for Feature Vectors and Structured Data
https://arxiv.org/pdf/1602.01321.pdf A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks
http://openreview.net/pdf?id=r17RD2oxe DEEP NEURAL NETWORKS AND THE TREE OF LIFE
By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we can roughly build their tree of life. Our work provides a new perspective to the deep representation and sheds light on possible novel applications of deep representation to other areas like Bioinformatics.
http://www.skytree.net/2015/09/04/learning-with-similarity-search
Mercer kernels are essentially a generalization of the inner-product for any kind of data — they are symmetric though self-similarity may not be the maximum. They are quite popular in machine learning and Mercer kernels have been defined for text, graphs, time series, images.
https://arxiv.org/abs/1702.05870 Cosine Normalization: Using Cosine Similarity Instead of Dot Product in Neural Networks
To bound dot product and decrease the variance, we propose to use cosine similarity instead of dot product in neural networks, which we call cosine normalization. Our experiments show that cosine normalization in fully-connected neural networks notably reduces the test error with lower divergence, compared to other normalization techniques. Applied to convolutional networks, cosine normalization also significantly enhances the accuracy of classification and accelerates the training.
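A rough sketch of the idea (an illustration of the principle, not the paper's implementation): normalize both the input and each weight row before the dot product, so every pre-activation is a cosine similarity bounded in [-1, 1] regardless of input scale.

```python
import numpy as np

def cosine_normalized_layer(x, W, eps=1e-8):
    """Pre-activations are cosine similarities between the input and each
    weight row, instead of raw dot products; output is bounded in [-1, 1].
    eps guards against division by zero for near-zero vectors."""
    x_n = x / (np.linalg.norm(x) + eps)
    W_n = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    return W_n @ x_n

rng = np.random.default_rng(0)
x = rng.normal(size=16) * 100.0        # deliberately large-magnitude input
out = cosine_normalized_layer(x, rng.normal(size=(4, 16)))
print(np.all(np.abs(out) <= 1.0))      # True: bounded regardless of scale
```

A plain dense layer's pre-activations would scale linearly with the input magnitude; here they stay bounded, which is the variance-reduction effect the abstract describes.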
https://arxiv.org/abs/1708.00138 The differential geometry of perceptual similarity
Human similarity judgments are inconsistent with Euclidean, Hamming, Mahalanobis, and the majority of measures used in the extensive literatures on similarity and dissimilarity. From intrinsic properties of brain circuitry, we derive principles of perceptual metrics, showing their conformance to Riemannian geometry. As a demonstration of their utility, the perceptual metrics are shown to outperform JPEG compression. Unlike machine-learning approaches, the outperformance uses no statistics, and no learning. Beyond the incidental application to compression, the metrics offer broad explanatory accounts of empirical perceptual findings such as Tversky's triangle inequality violations, contradictory human judgments of identical stimuli such as speech sounds, and a broad range of other phenomena on percepts and concepts that may initially appear unrelated. The findings constitute a set of fundamental principles underlying perceptual similarity.
https://arxiv.org/abs/1410.5792v1 Generalized Compression Dictionary Distance as Universal Similarity Measure
https://arxiv.org/abs/1804.08071v1 Decoupled Networks
we first reparametrize the inner product to a decoupled form and then generalize it to the decoupled convolution operator which serves as the building block of our decoupled networks. We present several effective instances of the decoupled convolution operator. Each decoupled operator is well motivated and has an intuitive geometric interpretation. Based on these decoupled operators, we further propose to directly learn the operator from data.
Decoupling the intra-class and inter-class variation gives us the flexibility to design better models that are more suitable for a given task.
https://arxiv.org/pdf/1804.09458v1.pdf Dynamic Few-Shot Visual Learning without Forgetting
we propose a novel attention based few-shot classification weight generator as well as a cosine-similarity based ConvNet classifier. This allows to recognize in a unified way both novel and base categories and also leads to learn feature representations with better generalization capabilities
https://arxiv.org/abs/1712.07136 Low-Shot Learning with Imprinted Weights
by directly setting the final layer weights from novel training examples during low-shot learning. We call this process weight imprinting as it directly sets weights for a new category based on an appropriately scaled copy of the embedding layer activations for that training example.
https://arxiv.org/abs/1805.06576 A Spline Theory of Deep Networks (Extended Version)
We build a rigorous bridge between deep networks (DNs) and approximation theory via spline functions and operators. Our key result is that a large class of DNs can be written as a composition of max-affine spline operators (MASOs), which provide a powerful portal through which to view and analyze their inner workings. For instance, conditioned on the input signal, the output of a MASO DN can be written as a simple affine transformation of the input. This implies that a DN constructs a set of signal-dependent, class-specific templates against which the signal is compared via a simple inner product; we explore the links to the classical theory of optimal classification via matched filters and the effects of data memorization. Going further, we propose a simple penalty term that can be added to the cost function of any DN learning algorithm to force the templates to be orthogonal with each other; this leads to significantly improved classifi- cation performance and reduced overfitting with no change to the DN architecture. The spline partition of the input signal space that is implicitly induced by a MASO directly links DNs to the theory of vector quantization (VQ) and K-means clustering, which opens up new geometric avenue to study how DNs organize signals in a hierarchical fashion. To validate the utility of the VQ interpretation, we develop and validate a new distance metric for signals and images that quantifies the difference between their VQ encodings. (This paper is a significantly expanded version of a paper with the same title that will appear at ICML 2018.).
Orthogonality penalty a term that penalizes non-zero off-diagonal entries in the matrix leading to the new loss with extra penalty.
https://arxiv.org/abs/1807.02873v1 Separability is not the best goal for machine learning
https://arxiv.org/abs/1807.11440v1 Comparator Networks
(i) We propose a Deep Comparator Network (DCN) that can ingest a pair of sets (each may contain a variable number of images) as inputs, and compute a similarity between the pair–this involves attending to multiple discriminative local regions (landmarks), and comparing local descriptors between pairs of faces; (ii) To encourage high-quality representations for each set, internal competition is introduced for recalibration based on the landmark score; (iii) Inspired by image retrieval, a novel hard sample mining regime is proposed to control the sampling process, such that the DCN is complementary to the standard image classification models.
https://arxiv.org/abs/1808.00508v1 Neural Arithmetic Logic Units
Experiments show that NALU-enhanced neural networks can learn to track time, perform arithmetic over images of numbers, translate numerical language into real-valued scalars, execute computer code, and count objects in images. In contrast to conventional architectures, we obtain substantially better generalization both inside and outside of the range of numerical values encountered during training, often extrapolating orders of magnitude beyond trained numerical ranges.
https://www.quantamagazine.org/universal-method-to-sort-complex-information-found-20180813
https://arxiv.org/pdf/1808.07526.pdf Deep Neural Network Structures Solving Variational Inequalities∗
We propose a novel theoretical framework to investigate deep neural networks using the formalism of proximal fixed point methods for solving variational inequalities. We first show that almost all activation functions used in neural networks are actually proximity operators. This leads to an algorithmic model alternating firmly nonexpansive and linear operators. We derive new results on averaged operator iterations to establish the convergence of this model, and show that the limit of the resulting algorithm is a solution to a variational inequality
https://arxiv.org/abs/1810.02906v1 Network Distance Based on Laplacian Flows on Graphs
Our key insight is to define a distance based on the long term diffusion behavior of the whole network. We first introduce a dynamic system on graphs called Laplacian flow. Based on this Laplacian flow, a new version of diffusion distance between networks is proposed. We will demonstrate the utility of the distance and its advantage over various existing distances through explicit examples. The distance is also applied to subsequent learning tasks such as clustering network objects.
https://arxiv.org/pdf/1810.13337v1.pdf LEARNING TO REPRESENT EDITS
By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data.
https://arxiv.org/abs/1808.10584 Learning to Describe Differences Between Pairs of Similar Images
We collect a new dataset by crowd-sourcing difference descriptions for pairs of image frames extracted from video-surveillance footage.
similarity.txt | CommonCrawl |
Duke Mathematical Journal
Duke Math. J.
Volume 53, Number 2 (1986), 315-332.
The Bergman space, the Bloch space, and commutators of multiplication operators
Sheldon Axler
Article info and citation
First available in Project Euclid: 20 February 2004
Permanent link to this document
https://projecteuclid.org/euclid.dmj/1077305045
doi:10.1215/S0012-7094-86-05320-2
Primary: 47B35: Toeplitz operators, Hankel operators, Wiener-Hopf operators [See also 45P05, 47G10 for other integral operators; see also 32A25, 32M15]
Secondary: 30H05: Bounded analytic functions 46E20: Hilbert spaces of continuous, differentiable or analytic functions
Axler, Sheldon. The Bergman space, the Bloch space, and commutators of multiplication operators. Duke Math. J. 53 (1986), no. 2, 315--332. doi:10.1215/S0012-7094-86-05320-2. https://projecteuclid.org/euclid.dmj/1077305045
Banach Wasserstein GAN
Jonas Adler and Sebastian Lunz
arXiv e-Print archive - 2018 via Local arXiv
Keywords: cs.CV, cs.LG, math.FA
First published: 2018/06/18 (3 years ago)
Abstract: Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions. The Wasserstein metric used in WGANs is based on a notion of distance between individual images, which induces a notion of distance between probability distributions of images. So far the community has considered $\ell^2$ as the underlying distance. We generalize the theory of WGAN with gradient penalty to Banach spaces, allowing practitioners to select the features to emphasize in the generator. We further discuss the effect of some particular choices of underlying norms, focusing on Sobolev norms. Finally, we demonstrate the impact of the choice of norm on model performance and show state-of-the-art inception scores for non-progressive growing GANs on CIFAR-10.
[link] Summary by Artëm Sobolev 3 years ago
The paper extends the [WGAN](http://www.shortscience.org/paper?bibtexKey=journals/corr/1701.07875) paper by replacing the L2 norm in the transportation cost by some other metric $d(x, y)$. By following the same reasoning as in the WGAN paper one arrives at a dual optimization problem similar to the WGAN's one except that the critic $f$ has to be 1-Lipschitz w.r.t. a given norm (rather than L2). This, in turn, means that critic's gradient (w.r.t. input $x$) has to be bounded in the dual norm (only in Banach spaces, hence the name). Authors build upon the [WGAN-GP](http://www.shortscience.org/paper?bibtexKey=journals/corr/1704.00028) to incorporate similar gradient penalty term to force critic's constraint.
In particular authors choose [Sobolev norm](https://en.wikipedia.org/wiki/Sobolev_space#Multidimensional_case):
$$||f||_{W^{s,p}} = \left( \int \sum_{k=0}^s ||\nabla^k f(x)||_{L_p}^p \, dx \right)^{1 / p}$$
This norm is chosen because it forces not only the pixel values but also their gradients to be close. The gradients are small in smooth textures and big at edges -- so this loss can regulate how much you care about the edges. Alternatively, you could express the same norm by first transforming $f$ using the Fourier Transform, then multiplying the result by $(1 + ||x||_{L_2}^2)^{s/2}$ pointwise, and then transforming it back and integrating over the whole space:
$$||f||_{W^{s,p}} = \left( \int \left( \mathcal{F}^{-1} \left[ (1 + ||x||_{L_2}^2)^{s/2} \mathcal{F}[f] (x) \right] (x) \right)^p dx \right)^{1 / p}$$
Here $f(x)$ would be image pixel intensities, and $x$ would be image coordinates, so $\nabla^k f(x)$ would be a spatial gradient -- the one you don't have access to, and it's a bit hard to estimate with finite differences, so the authors go for the second -- Fourier -- option. Luckily, a DFT is just a linear operator, and fast implementations exist, so you can backpropagate through it (TensorFlow already includes tf.spectral).
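For a 1-D discrete signal, the Fourier-space form of the norm can be sketched as follows (a simplified illustration with $s = 1$, $p = 2$, and `np.fft.fftfreq` standing in for the continuous frequency variable -- not the paper's actual implementation):

```python
import numpy as np

def sobolev_norm(f, s=1.0):
    """Discrete sketch of the W^{s,2} norm of a 1-D signal via the FFT:
    multiply each Fourier coefficient by (1 + |xi|^2)^(s/2), transform
    back, and take the L2 norm of the result."""
    F = np.fft.fft(f)
    xi = np.fft.fftfreq(len(f)) * 2 * np.pi   # angular frequencies
    weighted = np.fft.ifft((1 + xi**2) ** (s / 2) * F)
    return float(np.linalg.norm(weighted))

smooth = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
rough = np.sign(smooth)                        # square wave: sharp edges
print(sobolev_norm(rough) > sobolev_norm(smooth))  # edges cost more: True
```

The square wave and the sine have comparable pixel magnitudes, but the high-frequency content at the edges is up-weighted by the $(1 + |\xi|^2)^{s/2}$ factor, which is exactly the "care about edges" effect described above.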
Authors perform experiments on CIFAR and report state-of-the-art non-progressive results in terms of Inception Score (though not beating SNGANs by a statistically significant margin). The samples they present, however, are too small to tell if the network really cared about the edges. | CommonCrawl |
Proof of negation and proof by contradiction
March 29, 2010 Logic, TutorialAndrej Bauer
I am discovering that mathematicians cannot tell the difference between "proof by contradiction" and "proof of negation". This is so for good reasons, but conflation of different kinds of proofs is bad mental hygiene which leads to bad teaching practice and confusion. For reference, here is a short explanation of the difference between proof of negation and proof by contradiction.
By the way, this post is something I have been meaning to write for a while. It was finally prompted by Timothy Gowers's blog post "When is proof by contradiction necessary?" in which everything seems to be called "proof by contradiction".
As far as I can tell, "proof by contradiction" among ordinary mathematicians means any proof which starts with "Suppose …" and ends with a contradiction. But two kinds of proofs are like that:
Proof of negation is an inference rule which explains how to prove a negation:
To prove $\lnot \phi$, assume $\phi$ and derive absurdity.
The rule for proving negation is the same classically and intuitionistically. I mention this because I have met ordinary mathematicians who think intuitionistic proofs are never allowed to reach an absurdity.
Proof by contradiction, or reductio ad absurdum, is a different kind of animal. As a reasoning principle it says:
To prove $\phi$, assume $\lnot \phi$ and derive absurdity.
As a proposition the principle is written $\lnot \lnot \phi \Rightarrow \phi$, which can be proved from the law of excluded middle (and is in fact equivalent to it). In intuitionistic logic this is not a generally valid principle.
Admittedly, the two reasoning principles look very similar. A classical mathematician will quickly remark that we can get either of the two principles from the other by plugging in $\lnot \phi$ and cancelling the double negation in $\lnot \lnot \phi$ to get back to $\phi$. Yes indeed, but the cancellation of double negation is precisely the reasoning principle we are trying to get. These really are different.
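The asymmetry is easy to see in a proof assistant. In Lean 4, for instance, the first rule is just the definition of negation, while the second requires an explicitly classical principle (a sketch using core Lean's `Classical.byContradiction`):

```lean
-- Proof of negation: to prove ¬φ, assume φ and derive absurdity.
-- This is the definition of ¬φ as φ → False; no classical axiom is used.
example (φ : Prop) (h : φ → False) : ¬φ := h

-- Proof by contradiction: to prove φ, assume ¬φ and derive absurdity.
-- This requires a classical principle, equivalent to ¬¬φ → φ.
example (φ : Prop) (h : ¬φ → False) : φ := Classical.byContradiction h
```

The first example type-checks in pure intuitionistic logic; the second cannot be written without invoking `Classical`, which is precisely the distinction drawn above.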
I blame the general confusion on the fact that an informal proof of negation looks almost the same as an informal proof by contradiction. In order to prove $\lnot \phi$ a mathematician will typically write:
"Suppose $\phi$. Then … bla … bla … bla, which is a contradiction. QED."
In order to prove $\phi$ by contradiction a mathematician will typically write:
"Suppose $\lnot \phi$. Then … bla … bla … bla, which is a contradiction. QED."
The difference will be further obscured because the text will typically state $\lnot \phi$ in an equivalent form with negation pushed inwards. That is, if $\phi$ is something like $\exists x, \forall y, f(y) < x$ and the proof goes by contradiction then the opening statement will be "Suppose for every $x$ there were a $y$ such that $f(y) \geq x$." With such "optimizations" we really cannot tell what is going on by looking just at the proof. We have to take into account the surrounding context (such as the original statement being proved).
A second good reason for the confusion is the fact that both proof principles feel the same when we try to use them. In both cases we assume something believed to be false and then we hunt down a contradiction. The difference in placement of negations is not easily appreciated by classical mathematicians because their brains automagically cancel out double negations, just like good students automatically cancel out double negation signs.
Keeping all this in mind, let us look at Timothy Gowers's blog examples.
Irrationality of $\sqrt{2}$
The first example is irrationality of $\sqrt{2}$. Because "$\sqrt{2}$ is irrational" is by definition the same as "$\sqrt{2}$ is not rational" we are clearly talking about a proof of negation. There is a theorem about normal forms of proofs in intuitionistic logic which tells us that every proof of a negation can be rearranged so that it ends with the inference rule cited above. In this sense the method of proof "assume $\sqrt{2}$ is rational, …, contradiction" is unavoidable.
I want to make two further remarks. The first one is that the usual proof of irrationality of $\sqrt{2}$ is intuitionistically valid. Let me spell it out:
Theorem: $\sqrt{2}$ is not rational.
Proof. Suppose $\sqrt{2}$ were equal to a fraction $a/b$ with $a$ and $b$ relatively prime. Then we would get $a^2 = 2 b^2$, hence $a^2$ is even and so is $a$. Write $a = 2 c$ and plug it back in to get $2 c^2 = b^2$, from which we conclude that $b$ is even as well. This is a contradiction since $a$ and $b$ were assumed to be relatively prime. QED.
No proof by contradiction here!
My second remark is that this particular example is perhaps not good for discussing proofs of negation because it reduces to inequality of natural numbers, which is a decidable property. That is, as far as intuitionistic logic is concerned, equality and inequality of natural numbers are both equally "positive" relations. This is reflected in various variants of the proof given by Gowers on his blog, some of which are "positive" in nature.
The situation with reals is different. There we could define the so-called apartness relation $x \# y$ to mean $x < y \lor y < x$. The negation of apartness is equality, but the negation of equality is not apartness, at least not intuitionistically (classically of course this whole discussion is a triviality). A proof of inequality $x \neq y$ of real numbers $x$ and $y$ may thus proceed in two ways:
The direct way: assume $x = y$ and derive absurdity
Via apartness: prove $x \# y$ and conclude that $x \neq y$
Note that the proof of $x \# y \Rightarrow x \neq y$ still involves the usual proof of negation in which we assume $x \# y \land x = y$ and derive absurdity.
A continuous map on $[0,1]$ is bounded
The second example is the statement that a continuous map $f : [0,1] \to \mathbb{R}$ is bounded. The direct proof uses the Heine-Borel property of the closed interval to find a finite cover of $[0,1]$ such that $f$ is bounded on each element of the cover. There is also a proof by contradiction which goes as follows:
Suppose $f$ were unbounded. Then we could find a sequence $(x_n)_n$ in $[0,1]$ such that the sequence $(f(x_n))_n$ is increasing and unbounded (this uses Countable Choice, by the way). By Bolzano-Weierstrass there is a convergent subsequence $(y_n)_n$ of $(x_n)_n$. Because $f$ is continuous the sequence $(f(y_n))_n$ is convergent, which is impossible because it is a subsequence of the increasing and unbounded sequence $(f(x_n))_n$. QED.
Can we turn this proof into one that does not use contradiction (but still uses Bolzano-Weierstrass)? Constructive mathematicians are well versed in doing such things. Essentially we have to look at the supremum of $f$, like Timothy Gowers does, but without actually referring to it. The following proof is constructive and direct.
Theorem: If every sequence in a separable space $X$ has a convergent subsequence, then every continuous real map on $X$ is bounded.
Proof. Let $(x_n)_n$ be a dense sequence in $X$ and $f : X \to \mathbb{R}$ continuous. For every $n$ there is $k$ such that $f(x_k) \geq \max(f(x_1), \dots, f(x_n)) - 1$. By Countable Choice there is a sequence $(k_n)_n$ such that $f(x_{k_n}) \geq \max(f(x_1), \dots, f(x_n)) - 1$ for every $n$. Let $(z_n)_n$ be a convergent subsequence of $(x_{k_n})_n$ and let $z$ be its limit. Because $f$ is continuous there is $d > 0$ such that $f(z_n) \leq f(z) + d$ for all $n$. Consider any $x \in X$. Because $f$ is continuous and $(x_n)_n$ is dense there is $x_i$ such that $f(x) \leq f(x_i) + 1$. Observe that there is $j$ such that $f(x_{k_i}) - 1 \leq f(z_j)$. Now we get $$f(x) \leq f(x_i) + 1 \leq \max(f(x_1), \dots, f(x_i)) + 1 \leq f(x_{k_i}) + 2 \leq f(z_j) + 3 < f(z) + d + 3.$$ We have shown that $f(z) + d + 3$ is an upper bound for $f$. QED.
I am pretty sure with a bit more work we could show that $f$ attains its supremum, and in fact this must have been proved by someone constructively.
The moral of the story is: proofs by contradiction can often be avoided, proofs of negation generally cannot, and if you think they are the same thing, you will be confused.
24 thoughts on "Proof of negation and proof by contradiction"
Good post, Andrej. I was thinking of responding to Gowers myself, but I could not have done it as well as you've just done. Thanks!
A related point, which is implicit in your discussion, is that constructively there are more distinctions to be made than classically, so that one can often "constructivize" a classical theorem in many different ways. I think it was Bishop who said, heuristically, that one can either use the same premises and derive a constructively weaker conclusion, or strengthen the premises and derive the same conclusion. I think it's not so simple as all that, but it's a helpful guide. An example that arises in basic algorithms is what it means for a finite graph to be non-planar: it could be that it is not planar or it could be that one can embed K_5 or K_{3,3} in it. Classically it's the same thing, but constructively they have content. Richer examples abound.
Can you say more about the planarity examples? I would naively assume that with good definitions the theorem must be the same. For example, I would take graphs to be decidable (decidable equality of vertices and edges, decidable neighborhood relation) and embeddings into the plane "nice" (uniformly continuous should do). Can't we recover the original theorem "planarity is equivalent to embedding `K_5` or `K_{3,3}`" under these conditions?
Peter LeFanu Lumsdaine says:
The classical theorem, as I remember, is "non-planarity is equivalent to embedding `K_5` or `K_{3,3}`" — an equivalence between a negative and a positive condition.
The most natural ways to think of this classically are surely as `(not P) => E`, `P <=> not E`, or the conjunction of the two. But each of these implies that at least one of `P`, `E` is (equivalent to) a negation, and hence equivalent to its own double negation. But I don't think we should be able to get either of those here.
If the property `E`, "embeddability of `K_5` or `K_{3,3}`" is equivalent to its double negation, then I think excluded middle follows, by constructing (given a statement `p`) a graph which has a `K_5` if `p` holds and a `K_{3,3}` if `p` fails. (More concretely, one can give a counterexample graph in presheaves on the poset "V".) And I suspect something similar should be do-able for planarity, though I don't immediately see it…
I can believe that allowing "fuzzy" graphs gets you into trouble. But I would say that the correct constructive definition of finite structures from discrete mathematics should require decidability all around (decidable equality, decidable relations, decidable subsets, etc). So, essentially discrete math can be coded into arithmetic.
It's quite possible that you're making a distinction where none exists: what classical mathematicians call "proof by contradiction" is exactly what constructivists call "proof of a negation". The only difference is that constructivists encode this distinction in the statement of the thesis by choosing either the positive or the negative form (and considering the latter to be logically weaker than the former) whereas classical mathematicians consider positive and negative forms to be entirely interchangeable.
It should be noted that one can "refutivize" a proof of a negative statement (turn it into a constructive refutation) just as easily as one can constructivize a classical proof by contradiction.
@guest: I am not inventing anything here, this is all standard logic. Please consult a textbook on logic because what you are saying is wrong. Even classical mathematicians do not identify a proposition with its double negation, they just take them to be equivalent. And even in classical logic this equivalence is proved from the Law of Excluded Middle.
Please consult a textbook on logic because what you are saying is wrong.
It is very possibly 'wrong' from a proof-theoretical point of view, but most classical mathematicians do not reason like this; they take a denotational and model-theoretic perspective where a proposition stands for either truth or falsity, so identifying equivalent propositions is entirely justified. And even proof-theoretically, any proposition can be replaced by an equivalent statement plus a proof that the latter implies the former; and if the implication is obvious enough it will be elided in an informal proof.
The moral: minimizing proof by contradiction is a good policy, and this extends to searching for positive refutations of 'negative' statements. But sometimes avoiding proof by contradiction is impossible or there's no compelling case for a direct proof, and here constructive mathematicians must either use negation signs, or (implicitly) punt to classical mathematicians and translate classical math to the negative fragment of constructive logic.
You are confusing propositions with their meaning and provability with validity. There might be some mathematicians who think that their perspective is "model-theoretic", but they still write proofs in their papers, don't they? You cannot avoid proofs in mathematics, no matter what your philosophical position is. Identification of equivalent propositions is justified proof-theoretically as well as semantically. It is easy to prove a meta-theorem which says that you can always substitute a proposition for an equivalent one, no need to go to semantics.
My point is that such identifications, while useful most of the time, are the source of confusion to many mathematicians who cannot tell the difference between a proof of negation and a proof by contradiction. You seem to be one of them, unfortunately.
Sridhar Ramesh says:
I am sympathetic to saying that ordinary language "proof by contradiction" might as well refer to "proof of negation", and that it's just that classical mathematics is fraught with following this up immediately with double negation elimination (in which case, it is what was called "proof by contradiction" in this post).
I am even sympathetic to saying that, in classical mathematics, propositions are even often considered to be syntactically identified with their double negations; sure, you can construct some formal syntax in which the two are syntactically distinct (and, indeed, this is the way we logicians normally analyze them), but just as well, you could construct a formal syntax in which they aren't, and the everyday non-logician mathematician isn't particularly concerned with formal syntax anyway. (Granted, if one is accustomed to thinking of a statement as even syntactically identical to its double negation, it becomes that much more difficult to abandon this idea when moving away from classical mathematics into the intuitionistic or what have you, but so it is when moving between different logical systems in many ways anyway…)
(That is, I am all for calling the distinctively classical reasoning principle "double negation elimination" instead of "proof by contradiction", using the latter term instead for the "proof of negation" principle common to both classical and intuitionistic logic by which negation is introduced. Of course, in actual practice, people use the term "proof by contradiction" in a multitude of differing ways, hence the confusion you remark upon…)
Doug Spoonwood says:
"It is very possibly 'wrong' from a proof-theoretical point of view, but most classical mathematicians do not reason like this; they take a denotational and model-theoretic perspective where a proposition stands for either truth or falsity, so identifying equivalent propositions is entirely justified."
The perspective here presumes the law of the excluded middle. This only holds on the view that all propositions of mathematics are either true or false and nothing else. But what about propositions like "infinite series are divergent"? You might wish to believe it false, since we do know of convergent infinite series. But, the proposition, as intended by the writer or speaker, may or may not say "*all* infinite series are divergent." It might say "most infinite series are divergent" or it might say "usually infinite series are divergent." Or consider a proposition like "prime numbers are odd." If the writer means "all prime numbers are odd", then the proposition is false. However, if the writer means "almost all prime numbers are odd", then the proposition is true.
@Doug: even though I agree with you that the quoted passage is confused, I am afraid your reply is equally confused, or even more. It is clear that the author of the passage meant "propositional function" when he said "proposition". Your examples are about making the distinction between a sentence (a proposition without free parameters) and a proposition (with free parameters), or about reading sense into vague statements. But the passage is about neither of those.
Zach Norwood says:
@Andrej: Is a proof by negation valid for the intuitionist because it can easily be turned into a proof by contraposition? (For example, if, in trying to prove $\lnot\phi$, I prove $\phi\Rightarrow b$, where $\lnot b$ is known to be true, then I've established the contrapositive $\lnot b\Rightarrow\lnot\phi$, which establishes $\lnot\phi$.) In fact, is the formalization of such a proof actually a proof of the contrapositive, at least as the intuitionist requires the proof to be formalized? Playing the same game with proofs by contradiction (as opposed to proofs by negation) requires excluded middle, of course (as you say).
@Zach: you have to be careful there. Intuitionistically $a \Rightarrow b$ implies $\lnot b \Rightarrow \lnot a$, but in general the other direction does not hold (and in fact, if it does then we get classical logic). However, if I remember correctly, as soon as either $a$ or $b$ is $\lnot\lnot$-stable (equivalent to its double negation) then we do get $(a \Rightarrow b) \iff (\lnot b \Rightarrow \lnot a)$. In the case of $\lnot \phi$, which is just an abbreviation for $\phi \Rightarrow \bot$, we do have $\lnot\lnot$-stability of $\bot$, so indeed we could prove the contrapositive if we wished. But the contrapositive is $\lnot \bot \Rightarrow \lnot \phi$, which is just $\lnot \phi$ again, so we end up where we started.
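This asymmetry is easy to check formally; a sketch in Lean 4 (core only, no Mathlib assumed):

```lean
-- The forward contrapositive is constructive: from a → b we get
-- ¬b → ¬a with no classical axiom.
example (a b : Prop) (h : a → b) : ¬ b → ¬ a :=
  fun hnb ha => hnb (h ha)

-- Instantiating b := False: from a → False (i.e. ¬a) we get
-- ¬False → ¬a, which is just ¬a again — contraposing a negation
-- returns us to where we started.
example (a : Prop) (h : a → False) : ¬ False → ¬ a :=
  fun _ => h
```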
Robert Furber says:
I am pretty sure with a bit more work we could show that f attains its supremum, and in fact this must have been proved by someone constructively.
Unfortunately not. Recall that it is not possible to prove the intermediate value theorem constructively (see page 5 of Bishop's book, the introduction section, send me a mail if you don't have the book and want me to just scan that part for you). However, a method to prove your statement can be adapted to prove the intermediate value theorem:
Suppose that $f$ is a function $[0,1] \to \mathbb{R}$, and $f(0) < 0 < f(1)$. Let $F(x) = \int_0^x -f(y) dy$. Use your procedure to find the $x$ in $[0,1]$ where $F$ attains its maximum. By the fundamental theorem of calculus $F$ is differentiable and $F' = -f$, and since $x$ is where $F$ has a maximum, $F'(x) = 0$, so $-f(x) = 0$, so $f(x) = 0$, and the intermediate value has been found. [End proof] Therefore no such method can exist (a proof of negation). Bishop proves the necessary calculus theorems in his book. Interestingly, Bishop shows how to compute the least upper bound of a continuous function on an interval, but doesn't mention that it can't be shown to take that value.
@Robert: I took the liberty to edit your comment to insert LaTeX and also to fix the proof a bit (I think HTML ate a part of it). Please let me know if I did injustice to your proof. (Also, I think it goes without saying that $f$ is assumed to be continuous in Bishop sense, i.e., uniformly continuous on every closed interval, otherwise how do you define the integral?)
You have shown, I think, that if every Bishop-continuous map on $[0,1]$ attains its maximum then the Intermediate Value Theorem holds.
But my claim (specialized to $X = [0,1]$) was: if every sequence in $[0,1]$ has a convergent subsequence, then every continuous map on $[0,1]$ attains its maximum.
I am very sorry, but how does what you have shown invalidate what I claimed? If we put both things together we only get: "If every sequence in $[0,1]$ has a convergent subsequence then the Intermediate Value Theorem holds". This is not in contradiction with possible failure of the Intermediate Value Theorem in Bishop-style mathematics. In fact, it is easy to show Bishop-style that the claim actually holds. For suppose every sequence in $[0,1]$ has a convergent subsequence and $f : [0,1] \to \mathbb{R}$ is continuous and $f(0) < 0 < f(1)$. We can find a sequence $(x_n)_n$ such that $|f(x_n)| \leq 2^{-n}$. Take a convergent subsequence of $(x_n)$, and its limit is where $f$ has a zero. It is an exercise to show that if every sequence of $[0,1]$ has a convergent subsequence then every continuous map on $[0,1]$ attains its maximum.
You are quite right, of course. I got confused because I didn't realise you had assumed something that was Bishop-false. This must be why Bishop's definition of compact isn't any of the usual ones, it's totally bounded and complete, and applies only to metric spaces.
Timothy Swan says:
Hi, I like how you are interested in intuitionistic and univalent constructions. I am wondering, however, how you constructed root 2 = a/b as part of your proof. There is no proof that it is correct, so you cannot use it as a statement, even for assumption, right? At least in univalent type theory, your proof could not provide the value of root 2 since there is a law of excluded middle used to define the irrational. At some point in evaluation you state that the negation of the rationals within the reals is the irrationals, which is not intuitionistic.
@Timothy Swan: I am not sure what bothers you, but as far as I can tell, you think that whenever I use negation then I am automatically classical. This is false. Negation is part of constructive mathematics (as well as univalent foundations). There is no "law of excluded middle" involved in the definition of irrationals. The irrationals are defined as the set (or type)
$$\{ x \in \mathbb{R} \mid \lnot \exists a, b \in \mathbb{Z} \,.\, b \neq 0 \land x = a/b\}.$$
There are negations in this definition but no law of excluded middle. The law of excluded middle is something that appears in proofs. It does not appear in definitions. The definition of irrational numbers is constructively valid.
Furthermore, we can completely avoid talking about irrational numbers: instead of saying "$\sqrt{2}$ is not rational" we can say "there is no rational number whose square equals $2$". The proof then proceeds just as above.
Arik says:
I encounter the following problem: students speak of proof by negation (meaning reduction to contradiction) when, in order to prove "if p then q", they prove "if not q then not p". The latter clearly uses the tautology " 'if p then q' iff 'if not q then not p' ". This type of proof is called proof by contrapositive, but students are not aware of the distinction.
@Arik: well, if you're their teacher perhaps you can teach them about the distinction.
Rafael Castro says:
It is interesting to see that if you search Wikipedia for "proof by contradiction" and look at the examples, you will find exactly the one about $\sqrt{2}$. And the other ones seem to be proofs of negation too.
Seeing this, I am asking myself whether I know anything that really is a proof by contradiction, in the sense that there is no translation to a proof of negation. Or even whether there is something that can only be proved using proof by contradiction and it has been shown that this is the only way (which seems to be a really hard thing to show).
I know some proofs that use LEM, but I don't see how they can be translated to proofs by contradiction.
@Rafael just prove a generic LEM instance without using LEM. This should be easy enough, since it doesn't distract with irrelevant specifics. A translation of arbitrary proofs that use LEM should then be mechanical. Also, since we know that generic LEM instances aren't provable intuitionistically (by a perhaps not entirely trivial semantic argument), you would have a kind of example that is in a sense "really only provable using proof by contradiction".
Math / 3rd Grade / Unit 6: Fractions
Students deepen their understanding of halves, thirds, and fourths to understand fractions as equal partitions of a whole, and are exposed to additional fractional units such as fifths, sixths, eighths, ninths, and tenths.
In Unit 6, students extend and deepen Grade 1 work with understanding halves and fourths/quarters (1.G.3) as well as Grade 2 practice with equal shares of halves, thirds, and fourths (2.G.3) to understanding fractions as numbers. Their knowledge becomes more formal as they work with area models and the number line. Throughout the unit, students have multiple experiences working with the Grade 3 specified fractional units of halves, thirds, fourths, sixths, and eighths. To build flexible thinking about fractions, students are exposed to additional fractional units such as fifths, ninths, and tenths.
Students begin the unit by partitioning different models of wholes into equal parts (e.g., concrete fraction strips and pictorial area models) (3.G.2), allowing this supporting cluster content to enhance the major work of Grade 3 with fractions. They identify and count equal parts as halves, fourths, thirds, sixths, and eighths in unit form before being introduced to the unit fraction $$\frac{1}{b}$$ (3.NF.1). Then, they make copies of unit fractions to build non-unit fractions, understanding unit fractions as the basic building blocks that compose other fractions (3.NF.1). Next, students transfer their work to the number line. They begin by using the interval from 0 to 1 as the whole and then extend to mark fractions beyond a whole. Noticing that some fractions with different units are placed at the exact same point on the number line, they come to understand equivalent fractions (3.NF.3a). Students express whole numbers as fractions and recognize fractions that are equivalent to whole numbers. Next, students use their understanding of the number of units and the size of each unit to compare fractions in simple cases, such as when dealing with common numerators or common denominators, by reasoning about their size (3.NF.3d). Lastly, students "use their developing knowledge of fractions and number lines to extend their work from the previous grade by working with measurement data involving fractional measurement values" (MD Progression, p. 10): they measure lengths with fractional units and use the data generated by measuring multiple objects to create line plots (3.MD.4), thus using this supporting cluster work to enhance the major work of fractions.
This unit affords ample opportunity for students to engage with the Standards for Mathematical Practice. Students will develop an extensive toolbox of ways to model fractions, including area models, tape diagrams, and number lines (MP.5), choosing one model over another to represent a problem based on its inherent advantages and disadvantages. Students construct viable arguments and critique the reasoning of others as they explain why fractions are equivalent and justify their conclusions of a comparison with a visual fraction model (MP.3). They attend to precision as they come to more deeply understand what is meant by equal parts, and being sure to specify the whole when discussing equivalence and comparison (MP.6). Lastly, in the context of line plots, "measuring and recording data require attention to precision (MP.6)" (MD Progression, p. 3).
Unfortunately, "the topic of fractions is where students often give up trying to understand mathematics and instead resort to rules" (Van de Walle, p. 203). Thus, this unit places a strong emphasis on developing conceptual understanding of fractions, using the number line to represent fractions and to aid in students' understanding of fractions as numbers. With this strong foundation, students will operate on fractions in Grades 4 and 5 (4.NF.3—4, 5.NF.1—7) and apply this understanding in a variety of contexts, such as proportional reasoning in middle school and interpreting functions in high school, among many others.
Have students complete the Mid-Unit Assessment after lesson 14.
Intellectual Prep for All Units
Read and annotate "Unit Summary" and "Essential Understandings" portion of the unit plan.
Do all the Target Tasks and annotate them with the "Unit Summary" and "Essential Understandings" in mind.
Take the Post-Unit Assessment.
Read pp. 7–9 of Progressions for the Common Core State Standards in Mathematics, Number and Operations - Fractions, 3-5
When referring to fractions throughout Unit 6, use unit language as opposed to "out of" language (e.g., $$\frac{3}{4}$$ should be described as "3 fourths" rather than "3 out of 4"). To understand why, read the blog post, Say What You Mean and Mean What You Say by William McCallum on Illustrative Mathematics.
Area model
Example: The following shape represents 1 whole. $$\frac{1}{6}$$ of it is shaded.
Fraction strip/tape diagram
Number line
Example: The point on the number line below is located at $$\frac{1}{6}$$.
Line plot
"Unit fractions [are] basic building blocks for fractions, in the same sense that the number 1 is the basic building block of the whole numbers. Just as every whole number can be obtained by combining ones, every fraction can be obtained by combining copies of one unit fraction" (NF Progression, p. 7).
Number line conventions that exist for whole numbers also apply to number lines that represent fractions. "In other words, just as 5 is the point on the number line reached by marking off 5 times the length of the unit interval from 0, so $$\frac{5}{3}$$ is the point obtained in the same way using a different interval as the unit of measurement, namely the interval from 0 to $$\frac{1}{3}$$" (NF Progression, p. 8).
With both equivalence and comparison of fractions, it is important to make sure that each fraction refers to the same whole. For example, it is possible for a fourth of a large pizza to be greater than half of a small pizza.
One can compare fractions with the same denominator by thinking about the number of units. For example, just as 5 inches is greater than 3 inches because it has a greater measurement in the same size, 5 eighths is greater than 3 eighths because it is made of more unit fractions of the same size.
One can compare fractions with the same numerator by thinking about the size of the unit. For example, just as 2 inches is greater than 2 centimeters because inches are larger than centimeters, 2 thirds is greater than 2 fifths because thirds are a larger unit than fifths.
The numerical axis of a line plot is simply a segment of a number line. Further, "the number line diagram in a line plot corresponds to the scale on the measurement tool used to generate the data" (MD Progression, p. 3).
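For teachers who want to sanity-check several of these understandings mechanically, the following snippet uses Python's standard-library `fractions` module (Python is purely an illustration choice here, not a curriculum tool, and the snippet is not student-facing material):

```python
from fractions import Fraction

# Unit fractions as building blocks: 5/3 is five copies of 1/3,
# just as 5 is five copies of 1.
assert sum(Fraction(1, 3) for _ in range(5)) == Fraction(5, 3)

# Equivalent fractions name the same point on the number line.
assert Fraction(2, 4) == Fraction(1, 2)

# Same denominator: more units of the same size means a greater fraction
# (5 eighths vs. 3 eighths, like 5 inches vs. 3 inches).
assert Fraction(5, 8) > Fraction(3, 8)

# Same numerator: the same count of larger units means a greater fraction
# (2 thirds vs. 2 fifths, like 2 inches vs. 2 centimeters).
assert Fraction(2, 3) > Fraction(2, 5)
```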
The materials, representations, and tools teachers and students will need for this unit
Straightedge (1 per student) — This can be any tool used to draw a straight line, e.g., a straightedge, a ruler, etc.
Strips of paper (5 per student) — These should measure about 5.5" by 1"
Fraction Cards (without pictures) (1 per pair of students)
Fraction Cards (with pictures) (1 per pair of students)
Template: Comparing Fractions Symbols (1 per pair of students)
Ruler (1 per every 6 students)
Ruler (1 per student) — These should measure to the nearest quarter inch and ideally the 0 inch mark is not flush with the end of the ruler.
Rectangular pieces of paper (6 per student) — These can be any size but must be the same size for all students. You could use a quarter piece of paper or an unlined index card for each.
Optional: Square inch tiles (at least 4 per student) — You could also provide square inches cut out from Template: Square Inch Grid if you do not have enough square inch tiles, which should be cut into pieces before the lesson. See Lesson 5 for more information.
Optional: Rectangular piece of paper (1 per student) — These should measure about 0.5" by 1.5"
Optional: Template: Equal Shares (1 per student)
fractional unit
numerator
unit fraction
unit form
To see all the vocabulary for Unit 6, view our 3rd Grade Vocabulary Glossary.
Word Problems and Fluency Activities
Access daily word problem practice and our content-aligned fluency activities created to help students strengthen their application and fluency skills.
Topic A: Understanding Unit Fractions and Building Non-Unit Fractions
Partition a whole into equal parts using concrete area models, identifying fractional units.
3.G.A.2 3.NF.A.1
Partition a whole into equal parts using concrete tape diagrams (i.e., fraction strips), identifying and writing unit fractions in fraction notation.
Partition a whole into equal parts, identifying and counting unit fractions using pictorial area models and tape diagrams, identifying the unit fraction numerically.
Partition a whole into equal parts using pictorial area models and tape diagrams, identifying and writing non-unit fractions in fraction notation.
3.NF.A.1
Identify the shaded and unshaded parts of a whole.
Build and write non-unit fractions greater than one whole from unit fractions.
Identify fractions of a whole that is not partitioned into equal parts.
Draw the whole when given the unit fraction.
Identify a shaded fractional part in different ways, depending on the designation of the whole.
Topic B: Fractions on a Number Line
Partition a number line from 0 to 1 into fractional units.
Place any fraction on a number line with endpoints 0 and 1.
Place any fraction on a number line with endpoints 0 and another whole number greater than 1.
Place any fraction on a number line with endpoints greater than 0.
3.NF.A.2 3.NF.A.3.C
Place various fractions on a number line where the given interval is not a whole.
3.NF.A.2 3.NF.A.3.D
Topic C: Equivalent Fractions
Understand two fractions as equivalent if they are the same point on a number line referring to the same whole. Use this understanding to generate simple equivalent fractions.
3.NF.A.3.A 3.NF.A.3.B
Understand two fractions as equivalent if they are the same sized pieces of the same sized wholes, though not necessarily the same shape. Use this understanding to generate simple equivalent fractions.
Express the whole number 1 as fractions.
3.NF.A.3.C
Express whole numbers greater than 1 as fractions.
Express whole numbers as fractions, and recognize fractions that are equivalent to whole numbers.
Explain equivalence by manipulating units and reasoning about their size.
3.NF.A.3.A 3.NF.A.3.B 3.NF.A.3.C
Topic D: Comparing Fractions
Compare unit fractions (a unique case of fractions with the same numerators) by reasoning about the size of their units. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with the symbols >, =, or <.
3.NF.A.3.D
Compare fractions with the same numerators by reasoning about the size of their units. Record the results of comparisons with the symbols >, =, or <.
Compare fractions with the same denominators by reasoning about their number of units. Record the results of comparisons with the symbols >, =, or <.
Compare and order fractions using various methods.
Understand fractions as numbers.
3.NF.A
Topic E: Line Plots
Measure lengths to the nearest half inch.
3.MD.B.4
Generate measurement data and represent it in a line plot.
Create line plots (dot plots).
3.G.A.2 — Partition shapes into parts with equal areas. Express the area of each part as a unit fraction of the whole. For example, partition a shape into 4 parts with equal area, and describe the area of each part as 1/4 of the area of the shape.
3.MD.B.4 — Generate measurement data by measuring lengths using rulers marked with halves and fourths of an inch. Show the data by making a line plot, where the horizontal scale is marked off in appropriate units— whole numbers, halves, or quarters.
Number and Operations—Fractions
3.NF.A — Develop understanding of fractions as numbers.
3.NF.A.1 — Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.
3.NF.A.2 — Understand a fraction as a number on the number line; represent fractions on a number line diagram.
3.NF.A.2.A — Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.
3.NF.A.2.B — Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.
3.NF.A.3 — Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size.
3.NF.A.3.A — Understand two fractions as equivalent (equal) if they are the same size, or the same point on a number line.
3.NF.A.3.B — Recognize and generate simple equivalent fractions, e.g., 1/2 = 2/4, 4/6 = 2/3). Explain why the fractions are equivalent, e.g., by using a visual fraction model.
3.NF.A.3.C — Express whole numbers as fractions, and recognize fractions that are equivalent to whole numbers. Example: express 3 in the form 3 = 3/1; recognize that 6/1 = 6. Example: locate 4/4 and 1 at the same point of a number line diagram.
3.NF.A.3.D — Compare two fractions with the same numerator or the same denominator by reasoning about their size. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.
2.G.A.3
2.G.A.3 — Partition circles and rectangles into two, three, or four equal shares, describe the shares using the words halves, thirds, half of, a third of, etc., and describe the whole as two halves, three thirds, four fourths. Recognize that equal shares of identical wholes need not have the same shape.
2.MD.A.1 — Measure the length of an object by selecting and using appropriate tools such as rulers, yardsticks, meter sticks, and measuring tapes.
2.MD.A.2 — Measure the length of an object twice, using length units of different lengths for the two measurements; describe how the two measurements relate to the size of the unit chosen.
2.MD.B.6 — Represent whole numbers as lengths from 0 on a number line diagram with equally spaced points corresponding to the numbers 0, 1, 2, …, and represent whole-number sums and differences within 100 on a number line diagram.
2.MD.D.9 — Generate measurement data by measuring lengths of several objects to the nearest whole unit, or by making repeated measurements of the same object. Show the measurements by making a line plot, where the horizontal scale is marked off in whole-number units.
4.MD.B.4 — Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.
4.NF.A — Extend understanding of fraction equivalence and ordering.
4.NF.B — Build fractions from unit fractions by applying and extending previous understandings of operations on whole numbers.
The Number System
6.NS.C.6 — Understand a rational number as a point on the number line. Extend number line diagrams and coordinate axes familiar from previous grades to represent points on the line and in the plane with negative number coordinates.
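Several of the fraction standards above (equivalence, whole numbers as fractions, same-denominator comparison) can be illustrated with Python's `fractions` module. This is just an illustrative aside; the particular values below are arbitrary examples, not part of the standards.

```python
from fractions import Fraction

# Equivalent fractions name the same point on the number line (3.NF.A.3.A/B):
print(Fraction(1, 2) == Fraction(2, 4))   # 1/2 and 2/4 are the same number
print(Fraction(4, 6))                     # automatically reduced to 2/3

# Whole numbers as fractions (3.NF.A.3.C): 3 = 3/1, 6/1 = 6, 4/4 = 1.
print(Fraction(3, 1) == 3, Fraction(6, 1) == 6, Fraction(4, 4) == 1)

# Comparing fractions with the same denominator (3.NF.A.3.D):
print(Fraction(2, 8) < Fraction(5, 8))
```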
Why Hamiltonian dynamics is better than random walk proposal in MCMC in some cases?
Hamiltonian dynamics outperforms the random-walk proposal in the Metropolis algorithm in some cases. Could someone explain the reason in simple words, without too much mathematics?
Fly_back
@JuhoKokkala, generally, in high-dimensional problems, the random walk proposal doesn't have good performance, whereas Hamiltonian dynamics does. – Fly_back Feb 13 '17 at 6:37
@JuhoKokkala My understanding of HMC is that we get samples with low energy H in the Hamiltonian dynamical system, which led me to this question of why the samples proposed by Hamiltonian dynamics can always be accepted. – Fly_back Feb 13 '17 at 6:51
In early November, Andrew Gelman posted a note about a "beautiful new paper" by Michael Betancourt on why HMC is better than random-walk MCMC. Gelman's main point was that HMC is at least twice as fast as competing methods. andrewgelman.com/2016/11/03/… – DJohnson Feb 13 '17 at 16:48
This question is a little underspecified, but given the answers posted below, I don't think it's too unclear to be answered. I'm voting to leave open. – gung♦ Feb 13 '17 at 17:58
First of all, let me state that I don't believe that the acceptance rate for HMC (Hamiltonian Monte Carlo) is always higher than for the Metropolis algorithm. As noted by @JuhoKokkala, the acceptance rate of Metropolis is tunable and a high acceptance rate doesn't mean your algorithm is doing a good job of exploring the posterior distribution. If you just use an extremely narrow proposal distribution (for example $T(q|q')=\mathcal{N}(q',\sigma I)$ with a very small $\sigma$), you will get an extremely high acceptance rate, but just because you're basically always staying in the same place, without exploring the full posterior distribution.
What I think you are really asking (and if I'm right, then please edit your question accordingly) is why Hamiltonian Monte Carlo has (in some cases) better performance than Metropolis. With "better performance" I mean that, for many applications, if you compare a chain generated by HMC with an equal-length (same number of samples $N$) chain generated by the Metropolis algorithm, the HMC chain reaches a steady state sooner than the Metropolis chain, finds a lower value for the negative log-likelihood (or a similar value, but in fewer iterations), the effective sample size is larger, the autocorrelation of samples decays faster with lag, etc.
I'll try to give an idea of why that happens, without going too much into mathematical details. So, first of all recall that MCMC algorithms in general are useful to compute high-dimensional integrals (expectations) of one or more functions $f$ with respect to a target density $\pi(q)$, especially when we don't have a way to directly sample from the target density:
$\mathbb{E}_{\pi}{[f]}=\int_{\mathcal{Q}} f(q)\pi(q)\text{d}q_1\dots\text{d}q_d$
where $q$ is the vector of $d$ parameters on which $f$ and $\pi$ depend, and $\mathcal{Q}$ is the parameter space. Now, in high dimensions, the volume of the parameter space which contributes the most to the above integral is not the neighborhood of the mode of $\pi(q)$ (i.e., it's not a narrow volume around the MLE estimate of $q$), because here $\pi(q)$ is large, but the volume is very small.
For example, suppose you want to compute the average distance of a point $q$ from the origin of $\mathbb{R}^d$, when its coordinates are independent Gaussian variables with zero mean and unit variance. Then the above integral becomes:
$\mathbb{E}_{\pi}{[||q||]}=\int_{\mathcal{Q}} ||q||(2\pi)^{-d/2}\exp{(-||q||^2/2)}\text{d}q_1\dots\text{d}q_d$
Now, the target density $\pi(q)=(2\pi)^{-d/2}\exp{(-||q||^2/2)}$ obviously has its maximum at the origin. However, by changing to spherical coordinates and introducing $r=||q||$, you can see that the integrand becomes proportional to $r^{d-1}\exp{(-r^2/2)} \text{d}r$. This function has a maximum at some distance from the origin. The region inside $\mathcal{Q}$ which contributes the most to the value of the integral is called the typical set, and for this integral the typical set is a spherical shell of radius $R\propto\sqrt{d}$.
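This concentration of mass away from the mode can be checked with a quick stdlib-only simulation (a minimal sketch; the dimension, sample count, and seed below are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)
d, n = 100, 2000  # dimension and number of sample points

# Draw points with i.i.d. standard normal coordinates and record their radii.
radii = [math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d)))
         for _ in range(n)]
mean_r = sum(radii) / n

# The mode of the density is at r = 0, yet essentially all mass sits near sqrt(d).
print(f"sqrt(d) = {math.sqrt(d):.2f}, mean radius = {mean_r:.2f}")
```

Not a single one of the sampled points lands anywhere near the mode, even though the density is highest there.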
Now, one can show that, in ideal conditions, the Markov chain generated by MCMC first converges to a point in the typical set, then starts exploring the whole set, and finally continues to explore the details of the set. In doing this, the MCMC estimate of the expectation becomes more and more accurate, with bias and variance which reduce with increasing number of steps.
However, when the geometry of the typical set is complicated (for example, if it has a cusp in two dimensions), then the standard random-walk Metropolis algorithm has a lot of difficulties in exploring the "pathological" details of the set. It tends to randomly jump "around" these regions, without exploring them. In practice, this means that the estimated value for the integral tends to oscillate around the correct value, and interrupting the chain at a finite number of steps will result in a badly biased estimate.
The Hamiltonian Monte Carlo tries to overcome this problem, by using information contained in the target distribution (in its gradient) to inform the proposal of a new sample point, rather than simply using a proposal distribution unrelated to the target one. So, that's why we say that HMC uses the derivatives of the target distribution to explore the parameter space more efficiently. However, the gradient of the target distribution, by itself, is not sufficient to inform the proposal step. As in the example of the average distance of a random point from the origin of $\mathbb{R}^d$, the gradient of the target distribution, by itself, directs us towards the mode of the distribution, but the region around the mode is not necessarily the region which contributes the most to the integral above, i.e., it's not the typical set.
In order to get the correct direction, in HMC we introduce an auxiliary set of variables, called momentum variables. A physical analog can help here. A satellite orbiting around a planet will stay in a stable orbit only if its momentum has the "right" value, otherwise it will either drift away to open space, or it will be dragged towards the planet by gravitational attraction (here playing the role of the gradient of the target density, which "pulls" towards the mode). In the same way, the momentum parameters have the role of keeping the new samples inside the typical set, rather than letting them drift towards the tails or towards the mode.
This is a small summary of a very interesting paper by Michael Betancourt on explaining Hamiltonian Monte Carlo without excessive mathematics. You can find the paper, which goes into considerably more detail, here.
One thing that the paper doesn't cover in enough detail, IMO, is when and why HMC can do worse than random-walk Metropolis. This doesn't happen often (in my limited experience), but it can happen. After all, you introduce gradients, which help you find your way in the high-dimensional parameter space, but you also double the dimensionality of the problem. In theory, it could happen that the slow-down due to the increase in dimensionality overcomes the acceleration given by the exploitation of gradients. Also (and this is covered in the paper) if the typical set has regions of high curvature, HMC may "overshoot", i.e., it could start sampling useless points very far away in the tails which contribute nothing to the expectation. In practice, however, this causes instability of the symplectic integrator which is used to implement HMC numerically, so this kind of problem is easily diagnosed.
DeltaIV

I see that while I was writing my answer, @DJohnson also cited the paper by Betancourt. However, I think the answer can still be useful as a summary of what one can find in the paper. – DeltaIV Feb 13 '17 at 17:42
As @JuhoKokkala mentioned in the comments, high acceptance rate doesn't necessarily give good performance. Metropolis Hastings' acceptance rate can be increased by shrinking the proposal distribution. But, this will cause smaller steps to be taken, making it take longer to explore the target distribution. In practice, there's a tradeoff between step size and acceptance rate, and a proper balance is needed to get good performance.
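This trade-off is easy to see in a toy random-walk Metropolis sampler for a standard normal target (a minimal sketch; the two proposal widths and the seed are arbitrary illustrative choices):

```python
import math
import random

def rw_metropolis(sigma, n_steps, seed=1):
    """Random-walk Metropolis on a standard normal target; returns acceptance rate."""
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, sigma)
        # log acceptance ratio for the target density exp(-x^2 / 2)
        if math.log(rng.random()) < 0.5 * (x * x - proposal * proposal):
            x, accepted = proposal, accepted + 1
    return accepted / n_steps

# A tiny proposal is almost always accepted but barely moves;
# a huge proposal moves far but is mostly rejected.
print(rw_metropolis(0.05, 20000), rw_metropolis(25.0, 20000))
```

The first acceptance rate is close to 1 and the second is far below it, yet neither extreme explores the target efficiently.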
Hamiltonian Monte Carlo tends to outperform Metropolis Hastings because it can reach more distant points with higher probability of acceptance. So, the question is: why does HMC tend to have higher acceptance probability than MH for more distant points?
MH has trouble reaching distant points because its proposals are made without using information about the target distribution. The proposal distribution is typically isotropic (e.g. a symmetric Gaussian). So, at each point, the algorithm tries to move a random distance in a random direction. If the distance is small relative to how quickly the target distribution changes in that direction, there's a good chance that the density at the current and new points will be similar, giving at least a reasonable chance of acceptance. Over greater distances, the target distribution may have changed quite a bit relative to the current point. So, the chance of randomly finding a point with similar or (hopefully) higher density may be poor, particularly as the dimensionality increases. For example, if the current point lies on a narrow ridge, there's a much greater chance of falling off the ridge than remaining on it.
In contrast, HMC exploits the structure of the target distribution. Its proposal mechanism can be thought of using a physical analogy, as described in Neal (2012). Imagine a puck sliding on a hilly, frictionless surface. The location of the puck represents the current point, and the height of the surface represents the negative log of the target distribution. To obtain a new proposed point, the puck is given a momentum with random direction and magnitude, and its dynamics are then simulated as it slides over the surface. The puck will accelerate in downhill directions and decelerate in uphill directions (perhaps even stopping and sliding back downhill again). Trajectories moving sideways along the wall of a valley will curve downward. So, the landscape itself influences the trajectory and pulls it toward higher probability regions. Momentum can allow the puck to crest over small hills, and also overshoot small basins. The puck's location after some number of time steps gives the new proposed point, which is accepted or rejected using the standard Metropolis rule. Exploiting the target distribution (and its gradient) is what allows HMC to reach distant points with high acceptance rates.
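The puck analogy above translates into very little code for a simple target. Here is a minimal, stdlib-only sketch of HMC with a leapfrog integrator for a 1-D standard normal target (the step size, trajectory length, and seed are arbitrary illustrative choices, not tuned recommendations):

```python
import math
import random

def leapfrog(q, p, eps, n_steps):
    """One leapfrog trajectory for U(q) = q^2/2, so dU/dq = q."""
    p -= 0.5 * eps * q            # initial half step for momentum
    for _ in range(n_steps - 1):
        q += eps * p              # full step for position
        p -= eps * q              # full step for momentum
    q += eps * p
    p -= 0.5 * eps * q            # final half step for momentum
    return q, p

def hmc(n_samples, eps=0.2, n_leapfrog=20, seed=7):
    rng = random.Random(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                   # random "kick" for the puck
        q_new, p_new = leapfrog(q, p, eps, n_leapfrog)
        # Metropolis accept/reject on the total energy H = U + K
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if math.log(rng.random()) < h_old - h_new:
            q = q_new
        samples.append(q)
    return samples

samples = hmc(3000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean = {mean:.3f}, variance = {var:.3f}")
```

The chain recovers the target's mean and variance; because the leapfrog error is small here, nearly every distant proposal is accepted.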
Here's a good review:
Neal (2012). MCMC using Hamiltonian dynamics.
As a loose answer (which seems to be what you are looking for): Hamiltonian methods take into account the derivative of the log-likelihood, while the standard MH algorithm does not.
bdeonovic
AdS/CFT and Integrability
As I indicated, I'm not able to give a comprehensive account of last week's string school at CERN. I can't help that fluxes make me sick in my stomach, or that counting BH microstates has a similar effect on me as counting sheep. Nevertheless, I listened with somewhat unexpected pleasure to the lectures on Integrability and AdS/CFT by Nick Dorey. This is a cute topic in mathematical physics that I had known nothing about before. Nick's lectures gave me a smattering of an idea of what's going on, and I'm sharing a few bits and pieces that have made their way into my long-term memory.
4D maximally supersymmetric Yang-Mills theory is dual to 10D IIB superstrings on AdS5xS5, Maldacena dixit. While many aspects of these two theories are fixed by their powerful symmetries, there is still a lot to learn about the dynamics. Some help may come from the integrable structures that have recently been discovered on both sides of the duality.
Integrability is a very non-generic feature of classical or quantum systems in which there are as many conserved charges as there are degrees of freedom. In classical mechanics, this would mean that the system can be fully solved by quadratures. Quantum mechanics is more tricky, but there still exists a method called the Bethe ansatz for finding the exact solutions.
The relevance of integrability in the context of SU(N) super Yang-Mills was pointed out in the paper by Minahan and Zarembo. Integrable structures pop out in the process of computing correlation functions of certain operators in perturbation theory. For example, we can compute gauge-invariant local correlators of the scalars that are present in the theory. We pick two of the three complex scalars, W and Z, and compute the conformal dimension of the operator
${\rm Tr}\left( Z^{L-M} W^M \right)$
or similar ones with different permutations of Z and W under the trace. The classical scaling dimension of this operator is L (the length of the chain), but there are divergent loop corrections that introduce an anomalous dimension. The additional complication is that loop corrections mix operators with various M, so that we have to deal with a matrix of anomalous dimensions that has to be diagonalized. The eigenvectors correspond to operators with definite scaling dimensions.
Now, the scalars W,Z form a doublet under the SU(2) subgroup of the SO(6) R-symmetry so we can call them spin up and spin down. It looks more fashionable to represent the operators as spin chains, for example
${\rm Tr}(WWZWWZ) \to |\uparrow\uparrow\downarrow\uparrow\uparrow\downarrow\rangle$
It turns out that this analogy is more far reaching. One-loop computations simplify in the large N limit of SU(N) because the planar diagrams can only "flip one spin". One finds that the matrix of anomalous dimensions is given by
$\frac{\lambda}{8 \pi^2}\sum_{l=1}^{L}\left(1 - P_{l,l+1}\right)$
where $\lambda$ is the t'Hooft coupling and P is an operator that exchanges the neighboring spins. A trained eye recognizes in the above the Hamiltonian of the Heisenberg spin chain with nearest neighbor interactions. One can see that a vector with all spins up (the ferromagnetic vacuum) is an eigenvector, but simple vectors with one spin flipped to down are not. Nevertheless, the full spectrum of this system can be found exactly and the eigenvalue problem was solved in the 1930s by Bethe with the help of the Bethe ansatz (the connection to integrability was made much later by Faddeev). The whole spectrum can be constructed out of the combinations of vectors with one spin down, the so-called magnons.
The story continues on the string theory side of the duality, as shown in the paper by Hofman and Maldacena. But I stop here, since all these intricate connections make my head spin.
The video and transparencies should be available via the school's web page. But they are not. A commenter pointed out that there are some technical problems to which string theory has no solution for the moment.
Simple Harmonic Motion MCQ
In this page we have important objective type questions on Simple Harmonic Motion for JEE Main/Advanced. Hope you like them, and do not forget to like, share, and comment at the end of the page.
Linked Comprehension Type Questions
(A) A body of mass 36 g moves in SHM with amplitude A = 13 cm and period T = 12 s.
At t = 0, x = +13 cm.
Find the velocity when x=5 cm
(a) $\pm6.28 \ cm/sec$
(b) $\pm6.00 \ cm/sec$
(c) $\pm7.28 \ cm/sec$
(d) $\pm5.28 \ cm/sec$
We know that for SHM
$v=\pm\omega\sqrt{\left(A^2-x^2\right)}$
Where A is amplitude and $\omega=2 \pi /T$
Solving we get
$v=\pm6.28cm/sec$
Find the displacement at t=2 sec
(a) 7.0 cm
(b) 6.5 cm
(c) 6 cm
(d) None of these
We know that
$x=Acos{\omega}t$
Putting all values
x = 6.5 cm
Find the maximum acceleration and maximum velocity
(a) 3.00 cm/sec2, 6.8 cm/sec
(b) 3.56 cm/sec2, 6.0 cm/sec
(c) 3.2 cm/sec2, 6.1 cm/sec
(d) 3.56 cm/sec2, 6.8 cm/sec
$v_{max}=\omega A$
=6.8 cm/sec
$a_{max}=\omega^2A$
=3.56 cm/sec2
Find the equation of motion of the body
(a) $x=Acos{\omega}t$
(b) $x=Acos{(}\omega t+\pi)$
(c) $x=Acos{(}\omega t-\pi)$
At t=0, x=+A, which is satisfied by (a) $x=Acos{\omega}t$
Find the force acting on the body when t=2 sec
(a) -64 dyne
(b) -60 dyne
(c) 0 dyne
$a=-\omega^2x$
At t=2 x=6.5 cm
F=ma
=-64 dyne
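All five answers to problem (A) can be checked numerically. This is just a quick sanity-check script using the problem data (CGS units: g, cm, s, dyne):

```python
import math

m, A, T = 36.0, 13.0, 12.0           # mass (g), amplitude (cm), period (s)
w = 2 * math.pi / T                  # angular frequency (rad/s)

v_at_5 = w * math.sqrt(A**2 - 5.0**2)          # |v| when x = 5 cm
x_at_2 = A * math.cos(w * 2.0)                 # displacement at t = 2 s
v_max, a_max = w * A, w**2 * A                 # cm/s and cm/s^2
F_at_2 = m * (-w**2 * x_at_2)                  # force at t = 2 s, in dyne

print(f"v(x=5) = {v_at_5:.2f} cm/s, x(2) = {x_at_2:.2f} cm")
print(f"v_max = {v_max:.2f} cm/s, a_max = {a_max:.2f} cm/s^2, F(2) = {F_at_2:.1f} dyne")
```

The printed values reproduce the answers above: about 6.28 cm/s, 6.5 cm, 6.8 cm/s, 3.56 cm/s², and −64 dyne.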
A solid cylinder is attached to a horizontal massless spring so that it can roll without slipping along the horizontal surface. The spring constant is K. Mass of the cylinder is M.
The system is released from rest with the spring stretched by x. The center of mass of the cylinder executes SHM with time period T. Pick the correct value of T.
(a) $T=2\pi\sqrt{\frac{3M}{2K}}$
(b) $T=2\pi\sqrt{\frac{2M}{3K}}$
(c) $T=2\pi\sqrt{\frac{M}{K}}$
If at some instant the stretch in the spring is x and the velocity of the center of mass of the cylinder is v
Potential energy=$\frac{1}{2}Kx^2$
Translational kinetic energy
$=\frac{1}{2}Mv^2$
Rotational kinetic energy
$=\frac{1}{2}I\omega^2=\frac{1}{4}Mv^2$
So total energy of the system
$ E=\frac{1}{2}Mv^2+\frac{1}{4}Mv^2+\frac{1}{2}Kx^2$
$E=\frac{3}{4}Mv^2+\frac{1}{2}Kx^2$
Now for SHM
$\frac{dE}{dt}=0$

which gives $\frac{3}{2}M\frac{dv}{dt}+Kx=0$, i.e. SHM with $\omega^2=\frac{2K}{3M}$, so

$T=2\pi\sqrt{\frac{3M}{2K}}$
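The result can be sanity-checked by integrating $(3M/2)\,dv/dt=-Kx$ numerically and timing one full oscillation. M and K below are arbitrary illustrative values:

```python
import math

M, K = 2.0, 8.0                          # illustrative values
T_analytic = 2 * math.pi * math.sqrt(3 * M / (2 * K))

# Integrate (3M/2) dv/dt = -K x with semi-implicit Euler and measure the
# time between successive downward zero crossings of x(t).
x, v, t, dt = 1.0, 0.0, 0.0, 1e-5
crossings = []
while len(crossings) < 2:
    v += -(2 * K / (3 * M)) * x * dt
    x_new = x + v * dt
    if x > 0.0 >= x_new:                 # downward zero crossing in this step
        crossings.append(t + dt * x / (x - x_new))   # linear interpolation
    x, t = x_new, t + dt
T_simulated = crossings[1] - crossings[0]
print(f"analytic T = {T_analytic:.4f} s, simulated T = {T_simulated:.4f} s")
```

The simulated period matches $2\pi\sqrt{3M/2K}$ to well within a tenth of a percent.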
A mass M at the end of a spring executes SHM with a period t1 while the same mass execute SHM with a period t2 for another spring. T is the period of oscillation when the two springs are connected in series and Mass M is attached at the end.
Find out the correct relation
(a) $\frac{1}{T}=\frac{1}{t_1}+\frac{1}{t_2}$
(b) $T=t_1+t_2$
(c) $T^2=t_1^2+t_2^2$
(d) $\frac{1}{T^2}=\frac{1}{t_1^2}+\frac{1}{t_2^2}$
Time period for SHM in spring is given by
$T=2\pi\sqrt{\frac{M}{K}}$
Where K is the spring constant
Let assume $K_1$ and $K_2$ are the spring constant for first and second spring respectively
Then as per given data
$t_1=2\pi\sqrt{\frac{M}{K_1}}$ and $t_2=2\pi\sqrt{\frac{M}{K_2}}$
Now for series combination, effective spring constant
$K=\frac{K_1K_2}{K_1+K_2}$
$T=2\pi\sqrt{\frac{M(K_1+K_2)}{K_1K_2}}$
$T^2=4\pi^2\frac{M(K_1+K_2)}{K_1K_2}$
Splitting the fraction, $T^2=\frac{4\pi^2M}{K_1}+\frac{4\pi^2M}{K_2}$, so it is clear that
$T^2=t_1^2+t_2^2$
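A quick numerical check of $T^2=t_1^2+t_2^2$ (M, K1, and K2 are arbitrary illustrative values):

```python
import math

M, K1, K2 = 2.0, 10.0, 30.0              # illustrative values

t1 = 2 * math.pi * math.sqrt(M / K1)     # period with the first spring alone
t2 = 2 * math.pi * math.sqrt(M / K2)     # period with the second spring alone

K_series = K1 * K2 / (K1 + K2)           # effective constant of springs in series
T = 2 * math.pi * math.sqrt(M / K_series)

print(f"T^2 = {T**2:.6f}, t1^2 + t2^2 = {t1**2 + t2**2:.6f}")
```

The two printed values agree, as the algebra above requires.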
Consider a mass–spring system. The system is given an initial displacement and begins to oscillate with frequency f1. The system is then brought to rest, given a different initial displacement, and f2 is its new frequency of oscillation. Then the frequencies satisfy
(a) f1 = f2
(b) f1 > f2
(c) f1 < f2
(a) f1 = f2
Because the frequency does not depend on the amplitude of the motion
The instantaneous displacement of a particle of mass m executing SHM under a force constant k is
$x=Asin{(}\omega t+\varphi)$
Where $\omega=\sqrt{\frac{k}{m}}$
The time average of the kinetic energy over one time period T is
(a) $kA^2$
(b) $\frac{1}{4}kA^2$
(c) $\frac{1}{3}kA^2$
(d) $\frac{1}{2}kA^2$
Ans is (b)
$ K=\frac{1}{2}mv^2=\frac{1}{2}m\omega^2A^2{cos}^2{(}\omega t+\varphi)$
Average KE for one periodic motion is
$K_{avg}=\frac{\int_{0}^{T}Kdt}{\int_{0}^{T}dt}$
$=\frac{1}{T}\int_{0}^{T}{\frac{1}{2}m\omega^2A^2{cos}^2{(}\omega t+\varphi)dt}$
$=\frac{m\omega^2A^2}{2T}\int_{0}^{T}\left(\frac{cos{2}(\omega t+\varphi)+1}{2}\right)dt$
$=\frac{m\omega^2A^2}{2T}\left[\int_{0}^{T}{\frac{1}{2}dt+\int_{0}^{T}{\frac{cos{2}(\omega t+\varphi)}{2}dt}}\right]$
$=\frac{m\omega^2A^2}{4T}\left[T+\left[\frac{sin{2}(\omega t+\varphi)}{2\omega}\right]_0^T\right]$
Now $T=\frac{2\pi}{\omega}$
$=\frac{m\omega^2A^2}{4}$
Now as $\omega=\sqrt{\frac{k}{m}}$
=> $K_{avg}=\frac{1}{4}kA^2$
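The result $K_{avg}=\frac{1}{4}kA^2$ can also be verified by numerically averaging K(t) over one period (m, k, A, and φ below are arbitrary illustrative values):

```python
import math

m, k, A, phi = 2.0, 8.0, 1.5, 0.3        # illustrative values
w = math.sqrt(k / m)
T = 2 * math.pi / w

def kinetic(t):
    v = A * w * math.cos(w * t + phi)    # velocity for x = A sin(wt + phi)
    return 0.5 * m * v * v

# Average K(t) over one period using equally spaced samples; for a
# trigonometric polynomial this discrete mean equals the exact average.
n = 1000
K_avg = sum(kinetic(i * T / n) for i in range(n)) / n
print(f"numerical <K> = {K_avg:.6f}, (1/4) k A^2 = {0.25 * k * A**2:.6f}")
```

The numerical average matches $\frac{1}{4}kA^2$ to machine precision.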
For small oscillations, the potential energy curve as a function of displacement from the equilibrium position is
(a) Parabolic
(b) Hyperbolic
(c) Elliptical
(d) circular
$U=\frac{1}{2}kx^2$

Since $U\propto x^2$, the U–x curve is a parabola: writing $x^2=\frac {2U}{k}$ matches the standard parabola form $y^2=4ax$.

So (a) is the correct answer
The homogeneous linear differential equation
$\frac{d^2x}{dt^2}+2r\frac{dx}{dt}+\omega^2x=0$ represents the equation of
(a) Simple harmonic oscillator
(b) Damped harmonic oscillator
(c) Forced harmonic oscillator
Answer is (b)
We know equation of damped harmonic oscillator is
$m\ddot{x}+\gamma\dot{x}+kx=0$
=> $\ddot{x}+2\frac{\gamma}{2m}\dot{x}+\frac{k}{m}x=0$
Putting $r=\frac{\gamma}{2m}$
$\omega^2=\frac{k}{m}$
Equation becomes
$\ddot{x}+2r\dot{x}+\omega^2x=0$
Given the maximum velocity and acceleration of a harmonic oscillator as vmax and amax respectively, its time period in terms of vmax and amax is
(a) $\frac{2\pi v_{max}}{a_{max}}$
(b) $\frac{2\pi a_{max}}{v_{max}}$
(c) $2\pi a_{max}v_{max}$
(d) $\frac{\pi v_{max}}{a_{max}}$
Answer is (a)
So $\frac{a_{max}}{v_{max}}=\omega$
Now $\omega=\frac{2\pi}{T}$
So $T=\frac{2\pi v_{max}}{a_{max}}$
Which of the following function represents a simple harmonic oscillation
(a) $\sin \omega t-\cos \omega t$
(b) $\sin^2 \omega t$
(c) $\sin \omega t+\sin 2 \omega t$
(d) $\sin \omega t-\sin 2 \omega t$
(a) is the correct answer, since $\sin \omega t-\cos \omega t=\sqrt{2}\sin(\omega t-\pi /4)$ is a single sinusoid; the other options are either non-sinusoidal or a superposition of two different frequencies.
The period of oscillation of a simple pendulum of length L suspended from the roof of a vehicle which moves without friction down an inclined plane of inclination $\alpha$, is given by
(a) $T= 2 \pi \sqrt {\frac {L}{g cos \alpha}}$
(b) $T= 2 \pi \sqrt {\frac {L}{g}}$
(c) $T= 2 \pi \sqrt {\frac {L}{g sin \alpha}}$
(d) $T= 2 \pi \sqrt {\frac {L}{g tan \alpha}}$

Answer is (a). In the frame of the freely sliding vehicle, the effective acceleration acting on the pendulum is $g\cos \alpha$, so $T= 2 \pi \sqrt {\frac {L}{g cos \alpha}}$
DockRMSD: an open-source tool for atom mapping and RMSD calculation of symmetric molecules through graph isomorphism
Eric W. Bell & Yang Zhang (ORCID: 0000-0002-2739-1916)
Comparison of ligand poses generated by protein–ligand docking programs has often been carried out with the assumption of direct atomic correspondence between ligand structures. However, this correspondence is not necessarily chemically relevant for symmetric molecules and can lead to an artificial inflation of ligand pose distance metrics, particularly those that depend on receptor superposition (rather than ligand superposition), such as docking root mean square deviation (RMSD). Several of the commonly-used RMSD calculation algorithms that correct for molecular symmetry do not take into account the bonding structure of molecules and can therefore result in non-physical atomic mapping. Here, we present DockRMSD, a docking pose distance calculator that converts the symmetry correction to a graph isomorphism searching problem, in which the optimal atomic mapping and RMSD calculation are performed by an exhaustive and fast matching search of all isomorphisms of the ligand structure graph. We show through evaluation of docking poses generated by AutoDock Vina on the CSAR Hi-Q set that DockRMSD is capable of deterministically identifying the minimum symmetry-corrected RMSD and is able to do so without significant loss of computational efficiency compared to other methods. The open-source DockRMSD program can be conveniently integrated with various docking pipelines to assist with accurate atomic mapping and RMSD calculations, which can therefore help improve docking performance, especially for ligand molecules with complicated structural symmetry.
Computer-aided drug design, in particular protein–ligand docking, has brought about the discovery of many biologically active drugs [1, 2]. In many protein–ligand docking programs, a flexible small molecule structure is docked in a rigid protein receptor structure in order to find the optimal binding conformation and affinity of the small molecule within the protein binding pocket. Since the ability of these programs to accurately assess binding affinity is dependent on their ability to find the optimal conformation of the ligand in the protein binding pocket, docking programs are often benchmarked by their ability to reproduce the native binding pose of a ligand from a protein–ligand complex crystal structure. A common metric used to evaluate distance between the predicted pose and the native pose, given a superposition of their protein receptor structures, is the root mean square deviation (RMSD) between their respective atoms (Eq. 1):
$$RMSD = \sqrt {\frac{1}{N}\mathop \sum \limits_{i = 1}^{N} d_{i}^{2} }$$
where N is the number of atoms in the ligand, and di is the Euclidean distance between the ith pair of corresponding atoms.
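Given a fixed atomic correspondence, Eq. 1 is straightforward to compute. The following sketch (an illustration, not the DockRMSD implementation itself) assumes the i-th atom of one pose corresponds to the i-th atom of the other:

```python
import math

def rmsd(coords1, coords2):
    """Eq. 1, assuming the i-th atoms of the two poses already correspond."""
    assert len(coords1) == len(coords2)
    sq = sum((a - b) ** 2
             for p1, p2 in zip(coords1, coords2)
             for a, b in zip(p1, p2))
    return math.sqrt(sq / len(coords1))

# Hypothetical two-atom poses: pose_b is pose_a rigidly shifted by (3, 4, 0),
# so every atom pair is 5 units apart and the RMSD is exactly 5.
pose_a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
pose_b = [(3.0, 4.0, 0.0), (4.5, 4.0, 0.0)]
print(rmsd(pose_a, pose_a), rmsd(pose_a, pose_b))
```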
Docking RMSD can be most naïvely calculated with the assumption of direct atomic correspondence, or in other words, the assumption that the atomic labels between ligand structures in the given structure files are ordered and should remain static in the docking process. This assumption holds for asymmetric molecules like caffeine (Fig. 1a), but this correspondence is not always practically relevant for molecules with symmetric functional groups (e.g. ibuprofen, Fig. 1b) or whole-molecule symmetry (e.g. the pyrrolidine-based inhibitor of HIV-1 protease [3] in Fig. 1c), as they can give rise to binding poses that are identical in terms of chemistry, but not in terms of correspondence. Here, ibuprofen and HIV-1 protease pyrrolidine-based inhibitor have been chosen as illustrative examples, although there are various other molecules with symmetric structures in which naïve correspondence can result in false inflation of RMSD (e.g. the inhibitor BEA403 [4], c-di-GMP [5], etc.). For example, if one were to perfectly overlap two benzene molecules, their docking RMSD would have a value of zero. If one were to then rotate one molecule along one of its axes of symmetry until the two structures overlapped perfectly again, their docking RMSD should be zero due to the chemical identity of the overlap; since the overlapping atoms are differently labeled between the two molecules in this example, naïve docking RMSD would have a nonzero value. Therefore, molecular symmetry needs to be taken into account in order to derive an accurate docking RMSD value.
Examples of a an asymmetric ligand (PDB Ligand ID: CFF); b a slightly symmetric ligand (PDB Ligand ID: IBP); c a highly symmetric ligand (PDB Ligand ID: QN3). d An example ligand structure (left) and the resulting ligand structure when the atoms are reordered according to the optimal query-template atomic correspondence generated by the Hungarian method (right). Since the Hungarian method only takes atom type into account and not the bonds between atoms, the hypothetical molecule proposed by the Hungarian correspondence is physically impossible
Several docking programs have implemented docking RMSD modules to accommodate ligand symmetry. AutoDock Vina [6] was one of the first to implement symmetry correction in docking RMSD calculation, providing a module that creates correspondence by mapping each atom of one pose to the closest atom of the same type from the other pose. However, this method allows the potential for atoms that are close between the two structures to be used repeatedly and atoms that are distant to not be used at all. In response to this, Allen and Rizzo [7] implemented their own docking RMSD calculator in DOCK6 [8] which presents atomic correspondence mapping as a cost-minimization assignment problem, solved by using the Hungarian algorithm [9, 10]. However, considering the mapping problem in this way ignores the bonding structure of the ligand, and can potentially provide nonphysical assignments (Fig. 1d) and docking RMSD values that are lower than what should be physically possible. Several other docking programs, such as GOLD [11], AmberTools [12], and Glide [13, 14] also contain modules that calculate symmetry-corrected RMSD, but these modules generally do not publicly offer thoroughly detailed explanations of their symmetry correction algorithms and demand that the user install a much larger package to calculate symmetry corrected RMSD. Finally, OpenBabel [15] contains a C++ open source tool, obrms, that considers symmetry correction as a graph isomorphism problem, solved by the VF2 algorithm [16], but also currently requires that the user install the entirety of OpenBabel to use this tool. Docking RMSD calculated by these modules is distinct from conformational distance metrics calculated by programs such as LS-align [17] and RDKit [18], as these metrics are based on a superposition of the ligand structures themselves, not the receptor on which they are docked. 
Such a superposition is inappropriate for evaluation of docking poses due to the lack of consideration of the position and orientation of the ligand relative to the receptor; it is more appropriate for purely cheminformatic problems, such as ligand structural similarity comparisons. Therefore, there exists a need for a universal docking RMSD calculation module that properly considers molecular symmetry and does so with a clear, detailed description of its methodology.
Here we propose a new, open-source module, DockRMSD, to solve the atom mapping issue for symmetric molecular structures through graph isomorphism, where the optimal docking RMSD is calculated by searching through a pruned state space of all isomorphic mappings between two molecular structures. Source code in C, compiled binaries, and a web server implementation of DockRMSD are made freely available at the DockRMSD web site [19].
A general overview of the DockRMSD algorithm is presented in Fig. 2. To begin, the user provides a pair of structure files in MOL2 format, each containing a specific pose of the same ligand. The first file is arbitrarily defined as the "query" structure and the second as the "template" structure, for convenience of description. The elements of the heavy (non-H) atoms present in each structure, the coordinates of those atoms, and the bonding network between the pairs of atoms are read from the structure files. Subsequently, the atom and bond sets are compared in order to ensure that the two structures are of the same ligand molecule. Bonds are represented by a symmetric two-dimensional array which contains a string corresponding to bond type (single = "1", double = "2", aromatic = "ar", etc.) between bonded atoms i and j at array position [i, j], and contains empty strings otherwise. If the bond types do not agree between the two files, the bond network is stripped of bond types, preserving only which atoms are bonded.
The DockRMSD algorithm. DockRMSD calculates the optimal atom mapping and RMSD value for any given pair of poses for the same ligand, input as a pair of MOL2 structure files
Once the ligand structural information has been extracted, the next step is to determine the set of template atoms to which each query atom is chemically identical, referred to as the atom identity search. For each atom of the query structure, all atoms of the template structure of the same element are initially considered to be candidate mapping partners. Then, the set of atoms that the query atom is bonded to, as well as the bond types between them, is evaluated against the set of atoms and bonds present for each candidate mapping partner in the template; candidate template atoms are eliminated if their bonding structure does not match the query atom. This process is repeated, checking for identity between not only their set of bonded neighbor atoms, but the neighbors of those neighbor atoms as well, once again removing candidate atoms if the sets are not identical. A deeper search involving further neighbor atoms was attempted, but it was found that including more neighbors ultimately did not change the final optimal correspondence. Therefore, the identity search stops at this neighbor atom depth in order to minimize unnecessary neighbor set comparisons and optimize runtime. If more than one candidate remains, there is likely more than one atom in the template that is chemically identical to the query atom, meaning that the ligand has some degree of symmetry. Once the atom identity search is complete, each query atom will have a set of template atoms that are chemically equivalent to that query atom. For a completely asymmetric molecule (Fig. 1a), each query atom will only have one corresponding template atom. Therefore, calculating the optimal RMSD for these asymmetric molecules is a simple task of matching each query atom to its respective template atom and returning the RMSD calculated from this correspondence. 
However, for symmetric molecules, one must search through all possible assignments of template atoms in order to find the mapping whose RMSD is minimal. The putative computational expense of this search is the total number of possible mappings, i.e. the product of the candidate atom set sizes over all query atoms.
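The identity search and the resulting state-space size can be illustrated with a minimal sketch (hypothetical helper names; bond types are omitted here for brevity, although DockRMSD also compares them):

```python
from math import prod

def signature(atom, elems, nbrs, depth):
    """Depth-limited environment signature: the atom's element plus the
    sorted multiset of its neighbors' signatures one level shallower."""
    if depth == 0:
        return elems[atom]
    return (elems[atom],
            tuple(sorted(signature(n, elems, nbrs, depth - 1)
                         for n in nbrs[atom])))

def candidate_sets(q_elems, q_nbrs, t_elems, t_nbrs, depth=2):
    """For each query atom, the template atoms with an identical depth-2
    signature (the atom identity search described above)."""
    t_sigs = [signature(t, t_elems, t_nbrs, depth)
              for t in range(len(t_elems))]
    cands = []
    for q in range(len(q_elems)):
        s = signature(q, q_elems, q_nbrs, depth)
        cands.append([t for t, ts in enumerate(t_sigs) if ts == s])
    return cands

# Toy carboxylate-like O=C-O fragment: the two oxygens are chemically
# equivalent, so each gets two candidate partners.
elems = ["C", "O", "O"]
nbrs = [[1, 2], [0], [0]]
cands = candidate_sets(elems, nbrs, elems, nbrs)
n_mappings = prod(len(c) for c in cands)  # size of the mapping space
```

For a fully asymmetric molecule every candidate list has length one, the product is 1, and the trivial case described above is recovered.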
In order to find the deterministically optimal mapping between query and template atoms, an exhaustive assignment search reminiscent of the VF2 algorithm [16], coupled with Dead-End Elimination (DEE) [20], is implemented. In this procedure, query atoms are iteratively assigned the template label that provides the smallest squared interpose distance and can feasibly be added to the existing assignments. The first feasibility criterion is that a candidate template atom already assigned to a previous query atom cannot be assigned again; this ensures that all mappings are one-to-one, with no template atom mapped to more than one query atom. Second, if the query atom currently being mapped is bonded to already-mapped atoms, the template bonding network is checked to ensure that a bond also exists in the template between the labels given to those atoms from the query. If the bonds formed by a proposed assignment are not present in the template, the assignment is infeasible. Finally, the last feasibility criterion is DEE, which ceases assignment of a particular atom if all subsequent feasible assignments would result in an RMSD larger than the smallest heretofore observed RMSD (taken as infinity before any complete mapping has been evaluated). The query atoms are mapped in order of the number of possible template labels (smallest first), then the number of bonds to already-mapped query atoms (largest first), and finally the order in which they appear in the query file (smallest first). Once all query atoms have been assigned to template atoms, this correspondence is used to calculate RMSD. The minimum RMSD over all mappings, and the mapping that gave rise to it, are then printed by the program, along with the number of possible mappings.
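The search procedure above can be sketched as a small backtracking routine. This is a simplified illustration with hypothetical names: candidates are tried in list order rather than nearest-first, and partial-cost pruning stands in for full DEE.

```python
def best_mapping(cands, q_nbrs, t_bond, dist2):
    """Backtracking search over feasible query->template assignments.
    cands[q]    - template candidates for query atom q
    q_nbrs[q]   - query atoms bonded to q
    t_bond      - set of frozenset({a, b}) template bonds
    dist2[q][t] - squared distance between query atom q and template atom t
    Returns (minimum total squared distance, mapping dict)."""
    n = len(cands)
    order = sorted(range(n), key=lambda q: len(cands[q]))  # fewest candidates first
    best = [float("inf"), None]
    assign = {}

    def search(k, total):
        if total >= best[0]:          # prune on partial cost (DEE-style)
            return
        if k == n:
            best[0], best[1] = total, dict(assign)
            return
        q = order[k]
        for t in cands[q]:
            if t in assign.values():  # one-to-one check
                continue
            # bond-consistency check against already-mapped neighbors
            if any(nb in assign and frozenset({t, assign[nb]}) not in t_bond
                   for nb in q_nbrs[q]):
                continue
            assign[q] = t
            search(k + 1, total + dist2[q][t])
            del assign[q]

    search(0, 0.0)
    return best[0], best[1]

# Toy case: carboxylate-like O=C-O fragment with the two oxygens
# swapped in space between the poses (hypothetical distances).
cands  = [[0], [1, 2], [1, 2]]
q_nbrs = [[1, 2], [0], [0]]
t_bond = {frozenset({0, 1}), frozenset({0, 2})}
dist2  = {0: {0: 0.0}, 1: {1: 4.0, 2: 0.0}, 2: {1: 0.0, 2: 4.0}}
total, mapping = best_mapping(cands, q_nbrs, t_bond, dist2)
```

The swapped oxygen labels (1→2, 2→1) satisfy both feasibility checks and give the minimal cost.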
Docking conformation dataset and generation
To evaluate DockRMSD's symmetry correction and the reliability of the greedy search heuristic, we generated docking conformations based on the CSAR Hi-Q protein–ligand dataset [21]. This dataset contains 343 protein structures with manually refined binding pockets, each in complex with its respective ligand; we generated the docking decoy conformations ourselves using the AutoDock Vina program [6]. For each protein–ligand pair, the native ligand structure was removed, conformationally randomized using OpenBabel [15], and re-docked into the binding pocket using AutoDock Vina [6]. The generation of input PDBQT files for docking and the output file conversion from PDBQT to MOL2 was performed by OpenBabel. Docking RMSD was calculated between all 10 possible pairwise combinations of the top five poses generated from a single re-docking experiment, leading to a total of 3430 RMSD calculations (10 per protein–ligand pair, 343 protein–ligand pairs in total). All 3430 calculations were performed using a list of different programs on a Red Hat Enterprise Linux machine with an Intel i5-4590 CPU @ 3.30 GHz. The average total walltime for all 3430 RMSD calculations was 4.8 ± 0.7 s, 5.3 ± 0.9 s, and 60.1 ± 0.1 s for DockRMSD, naïve RMSD, and obrms, respectively (see "DockRMSD runtime comparison" section for a more detailed runtime analysis).
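The 3430 figure follows directly from the pairing arithmetic described above:

```python
from itertools import combinations

poses = ["pose1", "pose2", "pose3", "pose4", "pose5"]
pairs = list(combinations(poses, 2))  # all unordered pairs of the top 5 poses
n_per_complex = len(pairs)            # C(5, 2) = 10 pairs per complex
n_total = n_per_complex * 343         # 10 pairs for each of 343 complexes
```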
Here, naïve RMSD calculations relative to the native crystal structure pose were not calculated because the AutoDock Vina ligand preparation process removes direct atomic correspondence between the redocked ligand and the native ligand. AutoDock Vina re-orders the atoms of the ligand according to the ligand's torsional tree, and therefore, all Vina poses have direct correspondence with each other, but not with the original native ligand structure. Therefore, only programs that can find atomic correspondence between files can be used to compare the Vina poses to the native crystal structure pose. This limitation is why the dataset used to evaluate the programs consists only of docked poses; direct correspondence cannot be drawn between the native crystal structure and Vina-generated poses. Ligand structures have been visualized using UCSF Chimera [22].
Docking RMSD calculation through DockRMSD
To examine the impact of symmetry correction in docking RMSD calculation, we compare in Fig. 3a the symmetry-corrected RMSD calculated by DockRMSD and the naïve RMSD calculated from the default atom order of the structure files. While 2109 of the 3430 cases require no symmetry correction, the remaining 1321 (38.5%) are cases where adhering to naïve RMSD artificially inflates the docking RMSD, by more than 2 Å in 54 of these cases (Table 1). The most extreme examples of this occur when a ligand molecule is large and possesses a mirror plane of symmetry, and when the ligand poses roughly overlap. For these cases, determining the optimal mapping is essential because misplaced correspondence will give rise to unreasonably large interatom distances, especially when compared to the relatively small "true" RMSD. An example of a Huperzine A-based ligand of acetylcholinesterase [23] is shown in Fig. 3b, where the two halves of the molecule are chemically identical to one another and by eye should have a relatively small RMSD value. DockRMSD's calculation aligns with this rough assessment, calculating an RMSD value of 3.42 Å. However, because the query is flipped relative to the template, naïve RMSD considers this reorientation an important distinction and therefore calculates the RMSD to be 10.74 Å.
a Ligand RMSD calculated by DockRMSD versus that by the naïve RMSD calculations on 1321 ligand molecules with symmetric structures. b An example pair of poses where naïve RMSD calculation failed to provide the optimal RMSD due to molecular symmetry (Ligand PDB ID: E10; Receptor PDB ID: 1H22 [23]). Interpose correspondence between oxygen atoms is drawn to represent the source of the RMSD disagreement by different methods
Table 1 Counts of 3430 total RMSD calculations whose error relative to the deterministic DockRMSD calculation is zero, small (nonzero but smaller than 2.0 Å), or large (greater than 2.0 Å)
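A toy example (hypothetical 2D coordinates, not the E10 ligand discussed above) shows how a flipped symmetric pose inflates naïve RMSD while a symmetry-aware mapping recovers the true value:

```python
from math import sqrt

def rmsd(coords_a, coords_b, mapping):
    """RMSD between two poses under an explicit query->template atom mapping."""
    ss = 0.0
    for q, t in mapping.items():
        (ax, ay), (bx, by) = coords_a[q], coords_b[t]
        ss += (ax - bx) ** 2 + (ay - by) ** 2
    return sqrt(ss / len(mapping))

# A mirror-symmetric three-atom ligand whose query pose is flipped
# end-for-end relative to the template.
query    = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
template = [(1.0, 0.0), (0.0, 0.0), (-1.0, 0.0)]

naive     = rmsd(query, template, {0: 0, 1: 1, 2: 2})  # file atom order
corrected = rmsd(query, template, {0: 2, 1: 1, 2: 0})  # symmetry-aware
```

Here the naïve file-order mapping reports a large RMSD for two poses that are geometrically identical under the symmetry-aware mapping.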
In Fig. 4a, we present a comparison between the RMSD of DockRMSD and that calculated by the Hungarian algorithm, which has been adopted by several established methods, such as DOCK6 [8]. In the Hungarian algorithm, the mapping is generated through iterative manipulation of a cost matrix (i.e. an interatom distance matrix) such that a pattern of zero values corresponding to the optimal assignment appears. The performance of the Hungarian algorithm was evaluated using a Python implementation of the docking RMSD calculation procedure similar to what is described by Allen and Rizzo [7]. The script uses the Python Munkres package [24] to generate query-template atomic correspondence such that assignments can only be made between atoms of the same element. As explained above, the laxness of this algorithm causes it to over-optimize and generate RMSD values below what should be possible. As a result, in nearly every case analyzed, the Hungarian algorithm generated an RMSD value below the optimal answer found by DockRMSD (3269 of 3430 RMSD calculations, 95.3%; Table 1). This implies that the over-correction issue introduced by the Hungarian algorithm is not trivial.
a Comparison of the Hungarian algorithm against DockRMSD for the 3112 molecules whose RMSD was underestimated by the Hungarian algorithm. b Comparison of the Hungarian algorithm against DockRMSD for the 190 molecules whose RMSD was underestimated by the Hungarian algorithm in the native ligand pose benchmark. c An example pair of poses where the Hungarian algorithm grossly overcorrected for symmetry due to its insensitivity to global molecular topology (Ligand PDB ID: BEG; Receptor PDB ID: 1D4I [25]). Interpose correspondence between central carbon atoms and nitrogen atoms is drawn to represent the source of the RMSD disagreement. Hungarian correspondence is drawn in red to demonstrate that the correspondence should not be allowed according to the chemical inequivalence of the atoms bonded to each atom of the pair
In contrast to the comparison between DockRMSD and naïve RMSD, the largest discrepancies between DockRMSD and the Hungarian algorithm are present in near mirror-symmetric molecules whose poses overlap almost exactly. As an illustrative example, we present in Fig. 4c a result from the HIV-1 protease inhibitor BEA425 [25], where the poses presented look nearly identical by eye, and thus, one would anticipate the RMSD value should be low. However, this molecule is not truly symmetric due to a hydroxyl group near the center of the molecule, and therefore, the two poses are not truly chemically identical. Since the Hungarian algorithm only takes into account individual atom types and not global chemical identity, cases like these fool the algorithm into accepting regions of local correspondence at the cost of properly considering which atoms are bonded. Although the algorithm generates lower RMSD values, these values do not reflect correct correspondence of the atomic mapping derived from the ligand bonding structures.
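The over-correction can be reproduced on a toy case (hypothetical coordinates; for a molecule this small, brute force over all same-element permutations yields the same optimum a Hungarian solver would). The element-only assignment swaps two chemically inequivalent carbons and reports an RMSD of zero, while the bond-respecting mapping cannot:

```python
from itertools import permutations
from math import sqrt

elems    = ["O", "C", "C", "N"]  # O-C(a)-C(b)-N chain: the carbons are inequivalent
query    = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
template = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.0), (3.0, 0.0)]  # carbons swapped in space

def rmsd_for(mapping):
    ss = sum((query[q][0] - template[t][0]) ** 2 +
             (query[q][1] - template[t][1]) ** 2
             for q, t in enumerate(mapping))
    return sqrt(ss / len(mapping))

# Element-only optimum (all the Hungarian cost matrix enforces):
elem_only = min(
    rmsd_for(p) for p in permutations(range(4))
    if all(elems[q] == elems[t] for q, t in enumerate(p))
)

# Chemically correct mapping: C(a) bonds to O and C(b) bonds to N,
# so the carbons may not be swapped.
chemical = rmsd_for((0, 1, 2, 3))
```

The element-only minimum is lower than any chemically valid mapping allows, mirroring the under-estimation seen in Fig. 4a.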
Here, it is noted that the above RMSD calculations were performed on the AutoDock Vina docked conformations, chosen purely to enable the comparison of different RMSD calculation programs with direct correspondence. In fact, one of the most common applications of ligand RMSD calculation is benchmarking experiments that evaluate a docking program's ability to produce ligand poses that closely resemble the native conformation. In such experiments, poses are typically considered "near-native" if their RMSD relative to the native pose is ≤ 2.0 Å. In order to examine the performance of different programs with respect to this task, the top-ranked AutoDock Vina pose for each of the 343 protein–ligand pairs was compared against the crystal structure pose of the ligand provided by the CSAR Hi-Q set using both DockRMSD and the Hungarian algorithm, the results of which are presented in Fig. 4b and Table 2. In 190 of the 343 cases, the Hungarian algorithm produced a lower value than the optimal value determined by DockRMSD, 10 of which would have resulted in a false-positive classification of a "near-native" pose. These results demonstrate that evaluating a docking algorithm by RMSD values computed from incorrect atomic correspondences can artificially inflate docking results.
Table 2 A contingency table for 343 RMSD calculations between docked ligand poses and their respective native crystal structure ligand poses, calculated both by DockRMSD and the Hungarian algorithm
DockRMSD runtime comparison
In order to evaluate the runtime efficiency of DockRMSD, both naïve and symmetry-corrected RMSD calculations on all 3430 pose pairs were compared to the runtimes of obrms. The obrms package is a tool from OpenBabel that calculates RMSD by solving the graph isomorphism problem with an algorithm similar to DockRMSD's. The values calculated by obrms and DockRMSD (if the bond type information is not used in DockRMSD) are identical; therefore, the most pertinent comparison between these two programs is how quickly they respectively arrive at the correct answer. The results of this experiment are summarized in Fig. 5, with runtimes log-transformed to more closely resemble normal distributions. As is shown, every calculation performed by DockRMSD was faster than the fastest calculation made by obrms, which is consistent with the statistically significant difference between their average runtimes (t = 310.6, p < 10^−20,000 by one-tailed paired t-test). The difference between symmetry-corrected and symmetry-uncorrected runtime is also statistically significant (t = 43.9, p < 10^−400 by one-tailed paired t-test), but the magnitude of the mean difference between DockRMSD and obrms (1.04 log10(seconds)) is much larger than that between symmetry-corrected and naïve runtime (0.21 log10(seconds)). These data suggest that while the impact of symmetry correction on RMSD calculation time is observable, its impact on runtime relative to obrms, which performs a similar symmetry correction, is minimal.
Box and whisker plots of the walltime distributions (in log10(sec)) for each of the 3430 RMSD calculations as calculated by symmetry-corrected DockRMSD, symmetry-uncorrected naïve RMSD, and symmetry-corrected obrms
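The paired t statistic reported above can be computed without external packages; the sketch below uses synthetic walltimes, not the paper's data:

```python
from math import log10, sqrt

def paired_t(xs, ys):
    """One-tailed paired t statistic on log10-transformed walltimes."""
    d = [log10(x) - log10(y) for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / sqrt(var / n)

# Synthetic walltimes in seconds: tool A is consistently faster than tool B
a = [0.0012, 0.0015, 0.0011, 0.0014, 0.0013]
b = [0.0150, 0.0180, 0.0160, 0.0155, 0.0170]
t_stat = paired_t(b, a)  # large positive value: b is slower than a
```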
While a good portion of this runtime difference can be attributed to the fact that obrms is implemented using OpenBabel's object-oriented framework and thus instantiates more computationally intensive data structures than is necessary for this problem, DEE also contributes to the increased efficiency of DockRMSD. As an illustrative example of DEE's power, a buckminsterfullerene (C60) molecule was docked onto tRNA-guanine transglycosylase [26] using AutoDock Vina, and subsequently, docking RMSD was calculated between the top five poses using DockRMSD without DEE, DockRMSD with DEE, and obrms for runtime analysis. The choice of receptor was random and arbitrary in this experiment; docking on this receptor was only a means to generate hypothetical poses for the ligand and implies no greater biological relevance. However, buckminsterfullerene was chosen as the ligand because it is one of the most highly symmetric molecules observed in nature: each carbon is chemically identical to every other carbon in the molecule, leading to a total state space of 60^60 possible mappings, a greater number of mappings than there are atoms in the universe. Therefore, proper pruning of the mapping search space is essential to efficiently find the minimum RMSD for this molecule. Reflective of this, DockRMSD without DEE requires a relatively high amount of time (on average 93.3 ± 0.9 ms per ligand pair) to find the optimal solution, as the only pruning done is the bond-based and duplicate criteria described in the implementation; the atom identity search provides no information due to the symmetry of buckminsterfullerene. The obrms tool prunes more efficiently (on average 59.6 ± 0.9 ms per ligand pair) due to its direct implementation of the VF2 feasibility criteria, but still needs to enumerate every valid mapping to find the optimal one and thus takes longer to arrive at the optimal answer.
However, since DEE prunes mappings based on their cumulative square distance, DockRMSD is able to find the optimal solution within a timeframe that rivals the runtime of obrms on most other molecules (on average 8.7 ± 0.7 ms per ligand pair).
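The size of the buckminsterfullerene mapping space is easy to verify:

```python
from math import log10

n_mappings = 60 ** 60        # C60: every carbon is equivalent to every other
digits = log10(n_mappings)   # order of magnitude of the state space (~107)
atoms_in_universe = 10 ** 80 # common order-of-magnitude estimate
```

60^60 ≈ 10^107, comfortably exceeding the ~10^80 atoms in the observable universe, so exhaustive enumeration without aggressive pruning is hopeless.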
The inability of naïve RMSD calculation to account for molecular symmetry negatively impacts how we evaluate ligand poses generated by protein–ligand docking. In the dataset we analyzed, about two out of every five ligands require some sort of symmetry correction to achieve accurate docking RMSD values, some of which demonstrated an RMSD correction of more than 2.0 Å. While several attempts have been made to address this need, implementations that find mappings without considering atomic connectivity, such as those in DOCK6 and AutoDock Vina, ultimately fail to consider properties of the ligand that are necessary to find the true optimal symmetry-corrected RMSD. While modules from commercial programs like GOLD and Glide are also capable of finding the optimal solution and are likely more convenient if the poses being evaluated were generated from these programs, users who wish to use these programs must purchase a license or install hefty software packages to perform RMSD calculations. Finally, even when compared to analogous open-source modules, such as obrms, DockRMSD has demonstrated much faster calculations in all cases (particularly high-symmetry cases) due to its lightweight implementation. In addition to symmetry correction, the atomic correspondence search of DockRMSD promotes easier comparison between docking programs in benchmarking studies. Ligand poses generated by several programs do not necessarily have direct atomic correspondence, and so DockRMSD could be used as a universal analysis module to ensure all programs are able to be compared and that the comparison is fair.
Despite the ability of DockRMSD to calculate symmetry-corrected RMSD, a few shortcomings of the program remain. For example, DockRMSD requires that the two molecules that are provided are the same molecule due to the atom identity search step. This could potentially be solved through implementation of maximum common substructure searching. However, if the molecule being analyzed is symmetric, a common substructure could potentially correspond to several positions in the molecule, leading to several different potential RMSD values. In addition, DockRMSD currently only evaluates ligand pose distance through docking RMSD because of the popularity of this metric. However, RMSD is far from a perfect metric, particularly because of its inability to capture the conservation of essential protein–ligand interactions that confer high binding affinity. As of now, DockRMSD does not include metrics that address these shortcomings of RMSD because they require the consideration of the protein receptor structure, but future iterations of this software could feasibly incorporate this information along with the typical RMSD calculation.
The datasets generated and analysed during the current study as well as DockRMSD source code are available at the DockRMSD webserver, https://zhanglab.ccmb.med.umich.edu/DockRMSD/.
Tuccinardi T (2009) Docking-based virtual screening: recent developments. Comb Chem High Throughput Screen 12:303–314. https://doi.org/10.2174/138620709787581666
Śledź P, Caflisch A (2018) Protein structure-based drug design: from docking to molecular dynamics. Curr Opin Struct Biol 48:93–102. https://doi.org/10.1016/j.sbi.2017.10.010
Blum A, Böttcher J, Heine A et al (2008) Structure-guided design of C2-symmetric HIV-1 protease inhibitors based on a pyrrolidine scaffold. J Med Chem 51:2078–2087. https://doi.org/10.1021/jm701142s
Lindberg J, Pyring D, Löwgren S et al (2004) Symmetric fluoro-substituted diol-based HIV protease inhibitors: ortho-fluorinated and meta-fluorinated P1/P1′-benzyloxy side groups significantly improve the antiviral activity and preserve binding efficacy. Eur J Biochem 271:4594–4602. https://doi.org/10.1111/j.1432-1033.2004.04431.x
Benach J, Swaminathan SS, Tamayo R et al (2007) The structural basis of cyclic diguanylate signal transduction by PilZ domains. EMBO J 26:5153–5166. https://doi.org/10.1038/sj.emboj.7601918
Trott O, Olson AJ (2010) AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. J Comput Chem 31:455–461. https://doi.org/10.1002/jcc.21334
Allen WJ, Rizzo RC (2014) Implementation of the Hungarian algorithm to account for ligand symmetry and similarity in structure-based design. J Chem Inf Model 54:518–529. https://doi.org/10.1021/ci400534h
Allen WJ, Balius TE, Mukherjee S et al (2015) DOCK 6: impact of new features and current docking performance. J Comput Chem 36:1132–1156. https://doi.org/10.1002/jcc.23905
Kuhn HW (1955) The Hungarian method for the assignment problem. Nav Res Logist Q 2:83–97. https://doi.org/10.1002/nav.3800020109
Munkres J (1957) Algorithms for the assignment and transportation problems. J Soc Ind Appl Math 5:32–38. https://doi.org/10.1137/0105003
Jones G, Willett P, Glen RC et al (1997) Development and validation of a genetic algorithm for flexible docking. J Mol Biol 267:727–748. https://doi.org/10.1006/jmbi.1996.0897
Case DA, Ben-Shalom IY, Brozell SR et al (2018) AMBER 2018. University of California, San Francisco
Friesner RA, Banks JL, Murphy RB et al (2004) Glide: a new approach for rapid, accurate docking and scoring. 1. Method and assessment of docking accuracy. J Med Chem 47:1739–1749. https://doi.org/10.1021/jm0306430
Halgren TA, Murphy RB, Friesner RA et al (2004) Glide: a new approach for rapid, accurate docking and scoring. 2. Enrichment factors in database screening. J Med Chem 47:1750–1759. https://doi.org/10.1021/jm030644s
O'Boyle NM, Banck M, James CA et al (2011) Open Babel: an open chemical toolbox. J Cheminform 3:33. https://doi.org/10.1186/1758-2946-3-33
Vento M, Cordella LP, Foggia P, Sansone C (2004) A (sub) graph isomorphism algorithm for matching large graphs. IEEE Trans Pattern Anal Mach Intell 26:1367–1372
Hu J, Liu Z, Yu DJ, Zhang Y (2018) LS-align: an atom-level, flexible ligand structural alignment algorithm for high-throughput virtual screening. Bioinformatics 34:2209–2218. https://doi.org/10.1093/bioinformatics/bty081
RDKit: Open-source cheminformatics. http://www.rdkit.org
DockRMSD: docking pose distance calculation. https://zhanglab.ccmb.med.umich.edu/DockRMSD/
Desmet J, De Maeyer M, Hazes B, Lasters I (1992) The dead-end elimination theorem and its use in protein side-chain positioning. Nature 356:539–542. https://doi.org/10.1038/356539a0
Dunbar JB, Smith RD, Yang CY et al (2011) CSAR benchmark exercise of 2010: selection of the protein-ligand complexes. J Chem Inf Model 51:2036–2046. https://doi.org/10.1021/ci200082t
Pettersen EF, Goddard TD, Huang CC et al (2004) UCSF Chimera—a visualization system for exploratory research and analysis. J Comput Chem 25:1605–1612. https://doi.org/10.1002/jcc.20084
Wong DM, Greenblatt HM, Dvir H et al (2003) Acetylcholinesterase complexed with bivalent ligands related to Huperzine A: experimental evidence for species-dependent protein–ligand complementarity. J Am Chem Soc 125:363–373. https://doi.org/10.1021/ja021111w
munkres - Munkres implementation for Python. http://software.clapper.org/munkres/index.html
Andersson HO, Fridborg K, Löwgren S et al (2003) Optimization of P1–P3 groups in symmetric and asymmetric HIV-1 protease inhibitors. Eur J Biochem 270:1746–1758. https://doi.org/10.1046/j.1432-1033.2003.03533.x
Biela I, Tidten-Luksch N, Immekus F et al (2013) Investigation of specificity determinants in bacterial tRNA-guanine transglycosylase reveals queuine, the substrate of its eucaryotic counterpart, as inhibitor. PLoS ONE 8:e64240. https://doi.org/10.1371/journal.pone.0064240
We would like to thank Dr. Wallace Chan for his critique of this manuscript. Molecular graphics performed with UCSF Chimera, developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, with support from NIH P41-GM103311.
Project Name: DockRMSD.
Project home page: https://zhanglab.ccmb.med.umich.edu/DockRMSD/.
Operating system: Linux (precompiled binary); source code is platform-independent.
Programming language: C.
Other requirements: GCC 4.3.1 or higher or compatible Linux operating system.
License: GNU GPL.
The study is supported in part by the National Institute of General Medical Sciences [GM070449, GM083107, GM116960], National Institute of Allergy and Infectious Diseases [AI134678], and the National Science Foundation [DBI1564756].
Department of Computational Medicine and Bioinformatics, University of Michigan, 100 Washtenaw Avenue, Ann Arbor, MI, 48109-2218, USA
Eric W. Bell & Yang Zhang
EWB designed and implemented DockRMSD. YZ oversaw the process as a Principal Investigator and was heavily involved in manuscript revision. Both authors read and approved the final manuscript.
Correspondence to Yang Zhang.
Bell, E.W., Zhang, Y. DockRMSD: an open-source tool for atom mapping and RMSD calculation of symmetric molecules through graph isomorphism. J Cheminform 11, 40 (2019). https://doi.org/10.1186/s13321-019-0362-7
Symmetric molecules
Protein–ligand docking
Ligand pose comparison | CommonCrawl |
Dual Mechanism for the Emergence of Synchronization in Inhibitory Neural Networks
Ashok S. Chauhan, Joseph D. Taylor & Alain Nogaret
During cognitive tasks, cortical microcircuits synchronize to bind stimuli into unified perception. The emergence of coherent rhythmic activity is thought to be inhibition-driven and stimulation-dependent. However, the exact mechanisms of synchronization remain unknown. Recent optogenetic experiments have identified two neuron sub-types as the likely inhibitory vectors of synchronization. Here, we show that local networks mimicking the soma-targeting properties observed in fast-spiking interneurons and the dendrite-projecting properties observed in somatostatin interneurons synchronize through different mechanisms which may provide adaptive advantages by combining flexibility and robustness. We probed the synchronization phase diagrams of small all-to-all inhibitory networks in-silico as a function of inhibition delay, neurotransmitter kinetics, timings and intensity of stimulation. Inhibition delay is found to induce coherent oscillations over a broader range of experimental conditions than high-frequency entrainment. Inhibition delay boosts network capacity (ln 2)^−N-fold by stabilizing locally coherent oscillations. This work may inform novel therapeutic strategies for moderating pathological cortical oscillations.
The synchronization of electrical activity in the brain has been studied for several years to understand the mechanisms underpinning cognition1,2 and memory consolidation3. The γ-oscillations of cortical micro-circuits are thought to be initiated by networks of parvalbumin4,5 or somatostatin interneurons6 which entrain principal cells7,8,9. These two neuron sub-classes differ in their physiological characteristics and may have adapted to exploit specific nonlinear properties. An understanding of these properties and their functional advantages is now needed. Computational models have been used to test neuronal synchronization through the interneuron gamma (ING) mechanism10,11, the pyramidal interneuron gamma (PING) mechanism8,12,13, the action of both excitatory and inhibitory synapses14,15,16,17 and the modulation of long range inhibition by local dendritic gap junctions18,19,20,21,22,23, which have been derived from tonic current stimulation. Mutually inhibitory networks, however, are chaotic systems which encode the timings of current stimuli in cyclical paths of sequentially discharging neurons24,25. These networks are therefore expected to exhibit abrupt transitions between modes of oscillation when both the timings and amplitudes of stimuli are varied26,27,28,29. This is reminiscent of phase transitions in systems with many degrees of freedom whose sensitivity to interactions makes them difficult to predict from first principles. Recent advances in neuromorphic engineering30,31 allow such phase transitions to be measured in physical networks and are the only way to integrate complex multivariate stimuli in real time32,33 without compromise on model accuracy, size or complexity. A further merit of using neuromorphic systems is to demonstrate the robustness of the large number of stable modes of oscillation which we observe against noise and network imperfections.
In particular, the maximum network capacity is found to be robust against synaptic noise, component-to-component fluctuations and other experimental deviations of relevance to cortical networks. In this way, we establish inhibition delay and high frequency entrainment as dual mechanisms providing robust and tuneable synchronization.
We built analog silicon models of all-to-all neuronal networks. The constituent neurons implemented the Mahowald-Douglas model30 which transposes the conductances of ion channels into transistor conductances to translate the Hodgkin-Huxley model34 to very large scale integrated (VLSI) technology. We interconnected these neurons with mutually inhibitory synapses based on established VLSI circuit design31. These synapses have three gate biases which we set independently or in combination to delay the onset of the postsynaptic current, change the rise and decay time of the postsynaptic current, and vary the synaptic conductance (Supplementary Methods I,II). Accordingly, individual synapses have a tuneable inhibition delay d which we vary from 20 μs to model the latency time of neurotransmitter release35,36, to 800 μs to model the transmission line delay of inhibitory signals as they diffuse along the dendrites towards the axon hillock of dendrite projecting interneurons37. These inhibition delays are chosen to match the transit time of action potentials across the 200 μm–700 μm long dendrites of somatostatin interneurons37 at an average speed of 1–100 m/s (Fig. 1(a)). The decay (resp. rise) time of the postsynaptic current was set by the undocking (resp. docking) time of neurotransmitters on neuroreceptors (GABA), τu (resp. τd). τu was tuned over 0–8 ms, a range comparable to the period of neuron oscillations: 5–20 ms38 (Fig. 1(b)).
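As a toy illustration of these kinetics, the delayed postsynaptic current can be sketched as a difference of exponentials gated by the inhibition delay. The waveform shape, parameter values and function name below are illustrative assumptions, not the VLSI circuit equations:

```python
import math

def ipsc(t, d=0.3e-3, tau_d=0.5e-3, tau_u=1.5e-3, g=2e-6, v_drive=1.0):
    """Sketch of an inhibitory postsynaptic current: zero before the
    inhibition delay d, then a rise set by the docking time tau_d and a
    decay set by the undocking time tau_u (assumed double-exponential)."""
    if t < d:
        return 0.0  # presynaptic spike has not yet taken effect
    s = t - d
    return g * v_drive * (math.exp(-s / tau_u) - math.exp(-s / tau_d))
```

In this picture, sweeping d from 20 μs to 800 μs simply translates the waveform in time, which is the sense in which dendrite-projecting synapses postpone inhibition without reshaping it.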
Synchronization of a pair of mutually inhibitory neurons and its dependence on synaptic kinetics. (a) Fast-spiking soma-projecting and somatostatin dendrite-projecting interneurons. Synapses located on dendrites effectively delay the inhibition of the postsynaptic neuron by 0–800 μs. (b) Inhibitory postsynaptic current (red line) evoked by a presynaptic action potential (black line) applied to a VLSI synapse. Synaptic kinetics: inhibition delay d, neurotransmitter docking time τd, undocking time τu, and spike width W. (c) Membrane voltage oscillations of mutually inhibitory neurons below, at, and above the synchronization current, Is. τu = 1.5 ms. (d) Frequency-current dependence of a VLSI neuron (square symbols) and frequency-current dependence of phase-locked oscillations (red line). Their intercept gives the frequency (fs) and current (Is) of phasic oscillations. Domains of synchronized oscillations at d = 0.2W (vertical bands). (e) Phase diagram of synchronization in the d − Istim plane where delay d is normalised by the spike width W. Two alternative mechanisms contribute to synchronization in local inhibitory networks: a change in current stimulation (PVB: parvalbumin neuron-type synchronization) and an increase in inhibition delay (SST: somatostatin neuron-type synchronization). (f) Frequency of phasic oscillations as a function of the decay time of the postsynaptic current.
Synaptic kinetics of the half-center oscillator
We began to study the emergence of synchronization by probing the synchronization phase diagram of a pair of mutually inhibitory neurons as a function of synaptic kinetics in all connections (d, τu) and current stimulation applied to all neurons (Istim). When inhibition delay is small (\(d < 150\,\mu \)s), three modes of synchronized oscillations are observed as Istim increases (Fig. 1(c)). Above the depolarization threshold (Ith = 8 μA), neurons oscillate out-of-phase (antiphasic synchronization). They suddenly lock in phase (phasic synchronization) at Is = 14 μA. Higher current stimulation (\({I}_{stim} > {I}_{s}\)) increases the frequency of neuron oscillations and makes inhibition increasingly tonic. As a result, neurons decouple gradually. This loose coupling regime is characterized by higher order phase locking where one neuron entrains the other at a frequency which is a rational multiple of its own (Fig. 1(c)).
Longer inhibition delays (\(d > 150\,\mu \)s) broaden the synchronization current Is to a window of finite width [IL, IH] (Fig. 1(d)) which increases and eventually diverges at \(d > 300\,\mu \)s. The observation of phasic synchronization at longer inhibition delay concurs with similar results obtained by Van Vreeswijk et al.11 when the synaptic response time becomes slower. Antiphasic, phasic, and loose coupling regimes form 3 domains in the d − Istim phase diagram of Fig. 1(e) showing that phasic synchronization may be induced either by delaying inhibition or by applying a stimulation current close to Is. Delayed inhibition gives each neuron in the pair the time to depolarize prior to receiving inhibition from its partner. This condition is necessary but not sufficient to explain phasic synchronization. Inhibition delay also decreases the slope of the phase response curve of the post-synaptic neuron near the origin (Supplementary Methods II). This reduces the phase correction that mutual inhibition applies to the early and late firing neurons which has the effect of stabilizing synchronous oscillations.
For shorter inhibition delays (\(d < 150\,\mu \)s), the synchronization current (Is) and frequency (fs) decrease when τu increases. This dependency is well explained by calculating the frequency of phase synchronized oscillations fp (Supplementary Discussion I) and its intercept with the excitatory response curve of a neuron (Fig. 1(d)). We find \({f}_{s}\sim {\tau }_{u}^{-\mathrm{1/3}}\) (Fig. 1(f)). This result concurs with the onset of γ-oscillations shifting to lower frequency (current stimulation) following pharmacological manipulations that increase the recovery time of the postsynaptic current7,8.
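The graphical construction in Fig. 1(d) — the intercept of the phase-locked branch with the neuron's excitatory f–I curve — amounts to a one-dimensional root find. A minimal sketch, using made-up monotone curves in place of the measured VLSI responses (both functional forms and all numbers here are illustrative assumptions):

```python
def bisect_intersection(f1, f2, lo, hi, tol=1e-9):
    """Locate the current I where f1(I) = f2(I), assuming exactly one
    crossing inside [lo, hi]."""
    diff = lambda x: f1(x) - f2(x)
    assert diff(lo) * diff(hi) < 0, "curves must bracket a crossing"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative stand-ins (Hz vs uA); not the measured curves of Fig. 1(d).
f_neuron = lambda i: 40.0 * (i - 8.0) ** 0.5   # excitatory f-I response above I_th
f_locked = lambda i: 120.0 - 2.0 * i           # phase-locked branch f_p

i_s = bisect_intersection(f_neuron, f_locked, 8.0 + 1e-9, 60.0)
f_s = f_neuron(i_s)  # synchronization frequency at the intercept
```

Making the phase-locked branch f_p depend on τu and repeating the root find at each τu is the operation that traces out the fs(τu) dependence of Fig. 1(f).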
3-cell mutually inhibitory network
Larger inhibitory networks (\(N\ge 3\)) generally have chaotic dynamics which makes network oscillations highly dependent on the timings of current stimuli. We defined the state of the system using the phase lags of individual neurons relative to a reference (neuron 1) and obtained the state trajectories by measuring the temporal evolution of these phase lags \(\{{{\rm{\Delta }}{\rm{\Phi }}}_{i1}^{(p)}\}\), i = 2, 3 ...N over consecutive periods p = 1–50. The phase lag map of a 3-neuron network with 300 μs inhibition delay shows state trajectories converging towards 6 point attractors (Fig. 2(a)). These attractors are sub-divided into 3 categories according to the duration of their interspike intervals (ISI): T/3, T/2 and T where T is the period of synchronized oscillations (Fig. 2(b)). Two attractors (circle symbols) correspond to three neurons discharging in the clockwise and anticlockwise sequences, 1 → 2 → 3 and 1 → 3 → 2 (ISI = T/3). Three attractors (square symbols) correspond to 3 modes of partially synchronized oscillations including the sequence \(1\to \begin{array}{c}2\\ 3\end{array}\) and its 2 permutations (ISI = T/2). The single coherent attractor (diamond symbol) corresponds to all 3 neurons discharging in phase (ISI = T). The 3-neuron map shows the basins of attraction becoming smaller as oscillations become more coherent. This demonstrates the greater fragility of coherent states relative to the oscillations of sequentially discharging neurons. We find that for \(d > 300\,\mu \)s, coherent and partially coherent oscillations become stable over the entire range of current stimulation. If \(d < 150\,\mu \)s however, the network only supports the oscillations of sequentially discharging neurons, as we shall see below. We find that substituting non-delayed inhibitory synapses (d = 0) with gap junctions29 produces qualitatively similar phase portraits in that they only support sequentially discharging neurons (Fig. 2(c)). 
For completeness, we also considered gap junctions between excitatory neurons. We find that the excitatory network hosts a single state of collective oscillations (Fig. 2(d)). This expected result validates the correct operation of our analogue network. Returning to the 3-neuron network connected by non-delayed inhibitory synapses, and varying current stimulation applied to all neurons, we find that partially coherent oscillations vanish except in a very narrow range of current stimulation centered on Is - as in the neuron pair.
Phase portraits of 3-neuron inhibitory networks. (a) Experimental phase portrait of a three neuron network coupled via mutually inhibitory synapses. Antiphasic attractors (circle symbols), partially synchronized attractors (square symbols) and phasic attractor (diamond symbol) are the 6 limit cycle oscillations of the network. State trajectories (full lines) emanate from initial states evenly distributed over the entire phase space. Neuron dephasings ΔΦi1 were normalised by the cycle period T. Reciprocal inhibition was balanced \({g}_{ij}\approx {g}_{ji}=2\,\,\mu \)S with i, j = 1, 2, 3. (b) Transient neuron oscillations showing convergence towards the antiphasic attractor (ISI = T/3), the partially synchronized attractor (ISI = T/2), and the phasic attractor (ISI = T). (c) Phase portrait of a 3-neuron network interconnected with mutually inhibitory gap junctions showing antiphasic attractors only (circle symbols). \({g}_{ij}\approx {g}_{ji}=45\,\,\mu \)S. (d) If mutually excitatory gap junctions are used instead, a single phasic attractor is observed (diamond symbol). Parameters: (a,b) Istim = 25 μA, T = 18 ms, Ith = 8 μA, \({g}_{ij}^{(s)}=2\,\mu \)S, τu = 1.5 ms, τd = 1.5 ms, d = 300 μs; (c,d) Istim = 50 μA, Ith = 86 μA.
In the 3-neuron and 4-neuron networks, the synchronization current Is is the current that maximises the size of the coherent basin of attraction and stabilizes the coherent attractor with respect to noise (Fig. 3). For long inhibition delays (d = 350 μs), the network supports coherent oscillations over the entire range of current stimulation. When \(d < 150\,\mu \)s, coherent oscillations only form in a narrow range of current stimulation about Is. These observations generalize the d − Istim phase diagram of Fig. 1(e) to larger networks and demonstrate that synchronization may be achieved either through increases in inhibition delay or current stimulation.
Current dependence of the coherent attractor. Phase lag maps of the 3-neuron and 4-neuron inhibitory networks measured in the vicinity of the coherent attractor (yellow basin) at three levels of current stimulation: Istim = 20 μA, 30 μA and 44 μA. Vicinal basins of partially synchronized oscillations (grey, purple and blue trajectories) and antiphasic oscillations (red and black trajectories). The volume of the coherent basin passes through a maximum at \({I}_{s}\approx 30\,\mu \)A. Parameters: d = 350 μs, τu = 1.5 ms.
Emergence of synchronization in all-to-all inhibitory networks
We next demonstrate the emergence of synchronization in larger networks (N = 3, 4, 5) and the critical importance of inhibition delay in stabilizing locally coherent oscillations. The maximum number of attractors in a N-neuron network was calculated by counting the number of cyclically invariant discharge patterns allowing partial synchronization (Supplementary Discussion II). We find that the maximum network capacity increases as T3 = 6, T4 = 26, T5 = 150, T6 = 1082, … \({T}_{N}\sim (N-1)!/{(\mathrm{ln}2)}^{N}\)39. The minimum capacity, allowing sequential discharges only, is LN = (N − 1)!
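The sequence T3 = 6, T4 = 26, T5 = 150, T6 = 1082 can be reproduced by a direct combinatorial count — our reading of "cyclically invariant discharge patterns allowing partial synchronization": partition the N neurons into k co-firing groups (Stirling number S(N, k) ways), then order the groups around the cycle ((k − 1)! ways). The sequential-only floor LN = (N − 1)! is the k = N term. A sketch under that reading:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: ways to split n neurons into
    k non-empty co-firing groups."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def max_capacity(n):
    """T_N: choose the co-firing groups, then their cyclic firing order."""
    return sum(stirling2(n, k) * factorial(k - 1) for k in range(1, n + 1))

def min_capacity(n):
    """L_N: strictly sequential discharges only (one neuron per ISI)."""
    return factorial(n - 1)
```

For growing n, max_capacity(n) * log(2)**n / factorial(n - 1) approaches 1, recovering the quoted asymptotic TN ~ (N − 1)!/(ln 2)^N.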
Experimental results show that the capacity of an inhibitory network to encode information about its environment lies between LN and TN, depending on inhibition delay (Fig. 4). Longer inhibition delays (d = 400 μs) stabilize oscillations which range from purely phasic (Fig. 4: (a) diamond, (b) triangle, (c) hexagon) to purely sequential (Fig. 4(a–c) circles). In between, all intermediate states of partial synchronization are observed (Fig. 4(a–c)). For example, the 4-neuron map in Fig. 4(b) has 6 sequential attractors with 1 spike per ISI giving ISI occupancies (1, 1, 1, 1) (circle symbols), 12 partially synchronized attractors with ISI occupancies (2, 1, 1, 0) (square symbols), 4 + 3 partially synchronized attractors with (3, 1, 0, 0) and (2, 2, 0, 0) occupancies respectively (diamond symbols), and the coherent attractor (4, 0, 0, 0) (triangle symbol). Therefore the 4-neuron network hosts 26 attractors in total.
Emergence of synchronization in small inhibitory networks and its dependence on inhibition delay. Phase lag maps of the 3, 4 and 5-neuron networks measured at inhibition delays (a–c) d = 400 μs, (d–f) d = 250 μs and (g–i) d = 100 μs while keeping constant both the decay time of the postsynaptic current: τu = 1.5 ms and the inhibition peak current: −13.8 μA. The (N − 1)-dimensional phase space (straight lines) and the state trajectories within it (full lines) were projected orthographically. State trajectories converge towards point attractors classified according to the duration of their ISIs: T/N (black lines, circle attractors), T/(N − 1) (blue lines, square attractors), T/(N − 2) (orange lines, diamond attractors), T/(N − 3) (green lines, triangular attractors), T/(N − 4) (purple lines, hexagonal attractor). The total number of attractors observed at inhibitory delay d = 400/250/100 μs is 6/3/2 (N = 3), 26/17/6 (N = 4), 142/107/24 (N = 5), 1053/688/120 (N = 6).
Intermediate inhibition delay (d = 250 μs) suppresses coherent oscillations (Fig. 4(d–f)). In the 4-neuron network, the coherent attractor (ISI = T) and the partially coherent attractors (ISI = T/2) have vanished while those with ISI = T/3 (square symbols) and T/4 (circle symbols) remain. The partially coherent attractors which survive exhibit a reduced basin size (Fig. 4(d,f)).
When inhibition delay is reduced further (d = 100 μs), the only attractors left are sequential oscillations (Fig. 4(g–i)). The network capacity then scales as: 2 (N = 3), 6 (N = 4), 24 (N = 5) which matches the LN sequence above. These results demonstrate that, provided the inhibition delay is sufficiently large, the number of attractors increases according to sequence TN. For this, the inhibition delay needs to be at least 1/3 of the duration of the action potential (\(d > W\mathrm{/3}\)). The network capacity was found to be less sensitive to neurotransmitter kinetics. Increasing τu from 1.5 ms to 3.5 ms marginally increased the number of attractors. No further change was observed beyond \({\tau }_{u} > 3.5\) ms.
Figure 5 shows how the capacity of experimental networks scales with network size. At small inhibition delay (d = 100 μs), the experimentally observed capacity is minimum and follows sequence LN. At longer inhibition delay (d = 400 μs), one observes that the maximum number of attractors increases according to sequence TN. At intermediate delays, the network supports partially synchronized oscillations with low coherence, which include all oscillations exhibiting the smaller ISIs. Hence the network capacity lies between LN and TN. One concludes that longer inhibition delays (\(d > 300\,\mu \)s) boost the capacity to encode stimuli by a factor \({T}_{N}/{L}_{N}={(\mathrm{ln}\mathrm{2)}}^{-N}\). With a maximum capacity of (N − 1)!/(ln 2)^N, delayed inhibitory networks achieve a storage density which far exceeds winnerless networks \(\sim (N-1)!\)24 and Hopfield networks \(\sim 0.14N\)40. By achieving the maximum theoretical capacity, our in-silico networks demonstrate scalable associative memories with unprecedented memory density.
Scaling of network capacity with network size. Total number of attractors observed in the 3-neuron to 6-neuron networks at three different values of the inhibition delay: d = 400 μs (red dots), 250 μs (blue triangles), 100 μs (green diamonds). At intermediate delay (250 μs), the network capacity lies between the upper theoretical boundary TN (solid line) and the lower boundary LN (dashed line). Inset: Orthographic projections of point attractors which are distinguished by the number of ISIs per cycle: ISI = T/N (black dots), T/(N − 1) (blue dots), T/(N − 2) (orange dots), T/(N − 3) (green dots), T/(N − 4) (purple dot).
Our results suggest that inhibitory networks may synchronize via two mechanisms that exploit the distinct neurophysiological properties of fast-spiking interneurons36 and the inhibition delay introduced by dendrite projecting synapses37. This study considers the primary effect of dendrite targeting synapses to be the introduction of a transmission line delay because the network frequency covers a very narrow range set by the constant step amplitude of current stimuli. The complex spectral response of dendrites is however known to be important and would need to be considered if the amplitude of current stimulation was varied. Dendrite projecting somatostatin interneurons introduce transmission line delays of the order of 0–800 μs by projecting synapses on the 200–700 μm long dendrites of the mammalian visual cortex37. Transmission line delays of this magnitude postpone the onset of inhibition sufficiently to stabilize the coherent oscillations of inhibitory neurons (Fig. 4(a–c)). The anatomical properties of somatostatin neurons would thus warrant robust phasic synchronization which is weakly dependent on current stimulation or postsynaptic kinetics but is strongly dependent on the timings of stimulation. This result is consistent with the rapid attenuation of visually induced γ-oscillations observed when visual stimuli become uncorrelated6. The coherent attractor is unique and its basin occupies a very small volume of phase space (triangle symbol, Fig. 4). As a result, the state of collective synchronization is the least robust of all states with respect to noise and structural inhomogeneity. In contrast, the bulk of the phase space is filled with partially coherent attractors whose proportion increases very rapidly according to 1 − (ln 2)^N as the network size increases.
Using this expression, one calculates that partially coherent attractors form \( > \mathrm{98.7 \% }\) of all attractors for the typical neuronal population, \(N > 12\), excited during optogenetic experiments6. Besides being more numerous, partially coherent states also have wider basins which offer protection from decoherence by noise and structural heterogeneities (Fig. 4(a–c)). Accordingly, partially coherent states are the most thermodynamically stable with respect to coherent and sequential states and are the most likely to support synchronized electrical activity in the noisy environment of real cortical networks. Within partially coherent states, however, the neurons which oscillate in phase may distribute differently over the volume of the network. A subset of L neurons (\(L < N\)) may oscillate in phase at different locations of the network, producing spatially homogeneous firing akin to the fully synchronized state. Two partially coherent states with identical L-number differ through the permutations of stimuli. The equivalence of these states is demonstrated by the six-fold symmetry of phase maps of the 4-neuron network (Fig. 4(b)).
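The 98.7% figure follows directly from the capacity ratio: with roughly LN/TN ≈ (ln 2)^N of all attractors being purely sequential, the non-sequential (partially or fully coherent) fraction is approximately 1 − (ln 2)^N. A one-line check under that approximation:

```python
import math

def partially_coherent_fraction(n):
    """Approximate fraction of attractors that are not purely sequential,
    using the capacity ratio L_N / T_N ~ (ln 2)^N quoted in the text."""
    return 1.0 - math.log(2) ** n
```

partially_coherent_fraction(12) evaluates to about 0.988, i.e. the >98.7% quoted for populations of N > 12 neurons.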
Our results suggest that spatially homogeneous firing within partially coherent states may be promoted by local repulsion through gap junctions41. These junctions are known to predominantly couple neighbouring inhibitory cells of the same population42,43. As we have seen in Figs 2(c) and 4(g), gap junctions and fast inhibitory synapses share the property of supporting sequential neuronal oscillations. Electrical synapses thus have a destabilizing effect on local neural synchronization as reported in earlier numerical simulations21,23,44. At the same time, Fig. 4 shows that transmission line delays promote synchrony. An inhibitory network can thus achieve a homogeneous distribution of phasic neurons45 by breaking local coherence using gap junctions. Homogeneous firing is established from the long range attraction of delayed inhibition and the short range repulsion of electrical synapses. Note that many physical systems achieve long range order through short range repulsion. For example, the Wigner crystal arises from Coulomb repulsion between electrons46 and vortex-to-vortex repulsion is responsible for the Abrikosov lattice in type II superconductors47. The effect of introducing heterogeneity in the network is seen in Fig. 4(a–c) where residual imbalance in network conductance breaks the symmetry of phase lag maps. Introducing a range of inhibition delays or mixing gap junctions with chemical synapses would similarly increase the volume of some basins - those associated with spatially homogeneous firing - to the detriment of others26.
In contrast to somatostatin neurons, the wiring of parvalbumin neurons introduces delays which are too short to warrant automatic synchronization. Instead parvalbumin neurons may achieve synchronization through high frequency entrainment. This corresponds to the current induced synchronization which we observe at small d (Fig. 1(c,e)). Because frequency fs is dependent on neurotransmitter kinetics (Fig. 1(f)), this synchronization mechanism allows the onset of synchronized oscillations to be tuned using pharmacological manipulations targeting GABA receptors4,5,7,9,48.
Our study leads us to propose that local cortical circuits may have adapted to exploit the robustness of synchronization by delayed inhibition versus the tunability of synchronization by fast-spiking interneurons (Fig. 1(e)). These synchronization mechanisms suggest strategies to reduce pathological cortical oscillations which include: inactivating dendrite targeting synapses, blocking GABAB receptors to accelerate the recovery of the postsynaptic potential, and applying visual stimuli lacking spatial coherence at frequencies in the γ band. This study has focussed on purely inhibitory networks (ING) which have intrinsically chaotic dynamics. The consideration of excitatory neurons and feed-forward processes within the pyramidal-interneuron-gamma (PING) mechanism invokes regular dynamics which has been treated elsewhere8.
Electronic models
We synthesized two VLSI networks interconnecting 6 Mahowald-Douglas neurons30 with either inhibitory synapses or gap junctions (Supplementary Methods I). VLSI neurons modelled the dependence of the membrane voltage V on current stimulus Istim using the analogue electrical equivalent circuit of the neuron membrane. Its equation was \(C\dot{V}={g}_{Na}({E}_{Na}-V)+{g}_{K}({E}_{K}-V)+{g}_{L}V+{I}_{stim}\) where ENa and EK are the sodium and potassium reversal potentials and C is the membrane capacitance. The sodium and potassium conductances, gNa and gK, are modelled by the transconductances of p− and n− type field effect transistors respectively30. The gate variables m, h and n of the Hodgkin-Huxley model are represented in the analogue circuit by currents ι which are either activated or inactivated according to: \(\iota ({V}_{\tau ,x})={\iota }_{max}\{1+\,\tanh \,[({V}_{\tau ,x}-{V}_{x})/d{V}_{x}]\}/2\) where x ≡ {m, h, n}, Vx is the threshold voltage of each ion gate, and dVx is the width of the transition from the closed to the open state of that gate. The Vτ,x variables follow a first order dynamics \({\dot{V}}_{\tau ,x}=(V-{V}_{\tau ,x})/\tau x\) which describes the recovery of each gate variable and is characterized by recovery time τx29.
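The gate equations above can be stepped explicitly. This is a bare Euler sketch of one gate variable; i_max, the threshold V_x, the transition width dV_x, the recovery time and the time step are placeholder values, not the circuit biases:

```python
import math

def gate_current(v_tau, v_x=0.0, dv_x=1.0, i_max=1.0):
    """Gate current iota(V_tau,x) = i_max * (1 + tanh((V_tau,x - V_x)/dV_x)) / 2."""
    return 0.5 * i_max * (1.0 + math.tanh((v_tau - v_x) / dv_x))

def step_gate(v_tau, v, tau_x=1e-3, dt=1e-5):
    """Euler step of the recovery dynamics dV_tau,x/dt = (V - V_tau,x)/tau_x."""
    return v_tau + dt * (v - v_tau) / tau_x

# relax one gate towards a clamped membrane voltage of 1.0
v_tau = 0.0
for _ in range(1000):          # 10 ms at dt = 10 us
    v_tau = step_gate(v_tau, 1.0)
```

The gate sits half-open at its threshold and saturates well above it, while V_tau,x lags the membrane voltage with time constant tau_x — the lag that shapes activation and inactivation in the full model.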
Chemical synapses were implemented using a differential pair integrator31 (Supplementary Methods II). As our transistors functioned with above threshold currents as opposed to below threshold31, the postsynaptic current was approximately given by Ipost(t) = gS(t)(Vpost(t) − Vrev) where Vrev = 7 V was the reversal potential, Vpost(t) the membrane voltage of the postsynaptic neuron, g the maximum conductance and S(t) was the fraction of docked neurotransmitters at time t. The neurotransmitter docking rate was given by: \(\dot{S}(t)=[{S}_{\infty }({V}_{pre}(t))-S(t)]/{\tau }_{u}\) with \({S}_{\infty }(V)=0.5\{1+\,\tanh \,[(V-{V}_{th})/d{V}_{syn}]\}\). The empirical inhibition delay d, decay time τu and synaptic conductance g were controlled by 3 gate voltage parameters: Vth, VW and Vτ in the circuit (Supplementary Methods II). The synaptic conductance varied in the range g = 1–3 μS.
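The docking dynamics and postsynaptic current can likewise be stepped directly from the two equations above. Here τu, g and V_rev take values quoted in the text, while V_th, dV_syn, the drive voltage and the time step are assumed placeholders:

```python
import math

def s_inf(v, v_th=2.0, dv_syn=0.5):
    """Steady-state docked fraction S_inf(V) = 0.5*(1 + tanh((V - V_th)/dV_syn))."""
    return 0.5 * (1.0 + math.tanh((v - v_th) / dv_syn))

def step_synapse(s, v_pre, v_post, tau_u=1.5e-3, g=2e-6, v_rev=7.0, dt=1e-5):
    """Euler step of dS/dt = (S_inf(V_pre) - S)/tau_u, returning the new S
    and the postsynaptic current I_post = g*S*(V_post - V_rev)."""
    s = s + dt * (s_inf(v_pre) - s) / tau_u
    return s, g * s * (v_post - v_rev)

# drive the synapse with a sustained presynaptic depolarisation
s, i_post = 0.0, 0.0
for _ in range(2000):                       # 20 ms >> tau_u
    s, i_post = step_synapse(s, v_pre=5.0, v_post=0.0)
```

With the postsynaptic membrane below the reversal potential, the current is negative, i.e. hyperpolarizing, as required of an inhibitory synapse.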
We implemented gap junctions electronically using a differential transconductance amplifier to model electrical coupling between GABAergic-like interneurons41. Their current-voltage transfer characteristics have been measured by Zhao and Nogaret29. The gap junction current varies linearly as \({I}_{post}=g^{\prime} ({V}_{post}(t)-{V}_{pre}(t))\) near the balance point of the pre-synaptic and post synaptic membrane potentials41. The transconductance \(g^{\prime} \) is tuneable in the range 24 μS \( < g^{\prime} < 45\,\mu \)S using the gate bias VM of the current source transistor (Fig. S7). Away from the balance point, saturation effects reduce the rate of current injection29. We were able to change the sign of the injected current by swapping the voltage inputs and in this way obtain either an inhibitory or an excitatory link (Fig. 2(d)).
Circuits were built from VLSI current mirrors (ALD1116, ALD1117). The depolarization threshold of neurons was adjusted to match the range of synaptic currents. This was done by adjusting the leakage conductance of the neuron membrane. The current thresholds were Ith = 8 μA (synaptic coupling) and 86 μA (gap junction coupling). The duration of an action potential was W = 1 ms.
Data acquisition and analysis
Individual neurons were stimulated by timed current steps of constant amplitude Istim. These stimuli were generated by the analogue outputs of two DAQ cards (NI PCI6259) and a bank of 6 voltage-to-current converters. Labview code was written to vary the timings of current stimuli in a systematic manner so that initial conditions meshed the (N − 1)-dimensional phase space with a grid size of T/20. The Labview/DAQ card recorded the membrane voltage time series of individual neurons during each current protocol. The sampling frequency was 20 kHz. Between the end of one protocol and the beginning of the next, a 200 ms long time window was inserted during which no stimulation was applied to let the system return to its steady state.
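The stimulation protocol — meshing the (N − 1)-dimensional space of initial phase lags with a T/20 grid of step onsets, with neuron 1 as the reference — can be sketched as follows (function and variable names are ours, not the Labview code's):

```python
from itertools import product

def stimulus_onsets(n_neurons, period, mesh=20):
    """Onset times of the timed current steps: neuron 1 receives the
    reference step at t = 0, the other N-1 onsets sweep a T/mesh grid."""
    grid = [k * period / mesh for k in range(mesh)]
    return [(0.0,) + lags for lags in product(grid, repeat=n_neurons - 1)]

onsets = stimulus_onsets(3, period=18e-3)   # 3-neuron network, T = 18 ms
```

Each tuple is one protocol; the 200 ms rest window would be inserted between consecutive tuples.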
The dephasings of voltage peaks \(({{\rm{\Delta }}{\rm{\Phi }}}_{21}^{(p)},{{\rm{\Delta }}{\rm{\Phi }}}_{31}^{(p)},\,\mathrm{...}{{\rm{\Delta }}{\rm{\Phi }}}_{N1}^{(p)})\) were calculated in each oscillation period p = 1–50. The phase shifts of individual neurons were calculated as \({{\rm{\Delta }}{\rm{\Phi }}}_{i1}^{(p)}=({t}_{i}^{(p)}-{t}_{1}^{(p)})/T\) using a Matlab programme which extracted the timings of voltage peaks of neuron i and neuron 1 in each oscillation period. The state trajectories ΔΦ(p) were projected orthographically in the Coxeter plane of the (N − 1)-dimensional hypercube (N = 3, 4, 5) using projection matrices:
$${\hat{{\boldsymbol{P}}}}_{4N}=(\begin{array}{ccc}-\sqrt{2}\,\cos \,{\theta }_{4} & \sqrt{2}\,\sin \,{\theta }_{4} & 1\\ \sqrt{2}\,\sin \,{\theta }_{4} & -\sqrt{2}\,\cos \,{\theta }_{4} & 1\end{array}),$$
where θ4 = π/12, and:
$${\hat{{\boldsymbol{P}}}}_{5N}=(\begin{array}{cccc}1 & \cos \,{\theta }_{5} & 0 & -\,\cos \,{\theta }_{5}\\ 0 & \sin \,{\theta }_{5} & 1 & \sin \,{\theta }_{5}\end{array}),$$
where θ5 = π/4. The state trajectories pertaining to the same basin were regrouped using MATLAB code which calculated the coordinates of experimental attractors and their total number.
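The phase-shift calculation and the Coxeter-plane projection for N = 4 can be sketched as follows (Python standing in for the original MATLAB code, using the θ4 = π/12 matrix from the text; the spike-time arrays are illustrative):

```python
import math

T = 1.0  # oscillation period

def phase_shifts(peak_times):
    """Per-period phase shifts Delta Phi_i1 = (t_i - t_1) / T.

    peak_times: list of per-period lists [t_1, ..., t_N] of voltage-peak
    times. Returns, for each period, the vector (DPhi_21, ..., DPhi_N1).
    """
    return [[(t - row[0]) / T for t in row[1:]] for row in peak_times]

# Orthographic projection onto the Coxeter plane of the 3-cube (N = 4)
theta4 = math.pi / 12
P4 = [
    [-math.sqrt(2) * math.cos(theta4), math.sqrt(2) * math.sin(theta4), 1.0],
    [math.sqrt(2) * math.sin(theta4), -math.sqrt(2) * math.cos(theta4), 1.0],
]

def project(vec, P):
    """Apply a projection matrix row-by-row to one phase-shift vector."""
    return [sum(p * v for p, v in zip(row, vec)) for row in P]

peaks = [[0.00, 0.25, 0.50, 0.75],   # period 1 (illustrative spike times)
         [1.00, 1.30, 1.50, 1.70]]   # period 2
trajectory = [project(d, P4) for d in phase_shifts(peaks)]  # 2D points
```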
References

Yamamoto, J., Suh, J., Takeuchi, D. & Tonegawa, S. Successful execution of working memory linked to synchronized high-frequency gamma oscillations. Cell 157, 845–857 (2014).
Ward, L. M. Synchronous neural oscillations and cognitive processes. Trends in Cognitive Sciences 7, 553–559 (2003).
Singer, W. Synchronization of cortical activity and its putative role in information processing and learning. Annual review of physiology 55, 349–374 (1993).
Sohal, V. S., Zhang, F., Yizhar, O. & Deisseroth, K. Parvalbumin neurons and gamma rhythms enhance cortical circuit performance. Nature 459, 698–702 (2009).
Cardin, J. A. et al. Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature 459, 663–667 (2009).
Veit, J., Hakim, R., Jadi, M. P., Sejnowski, T. J. & Adesnik, H. Cortical gamma band synchronization through somatostatin interneurons. Nature Neuroscience 20, 951–959 (2017).
Whittington, M. A., Traub, R. D. & Jefferys, J. G. Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature 373, 612–615 (1995).
Traub, R. D., Whittington, M. A., Stanford, I. M. & Jefferys, J. G. A mechanism for generation of long-range synchronous oscillations in the cortex. Nature 383, 621–624 (1996).
Bartos, M., Vida, I. & Jonas, P. Synaptic mechanisms of synchronized gamma oscillations in inhibitory interneuron networks. Nature Reviews Neuroscience 8, 45–56 (2007).
Wang, X.-J. & Buzsáki, G. Gamma oscillations by synaptic inhibition in a hippocampal interneuronal network model. Journal of Neuroscience 16, 6402–6413 (1996).
van Vreeswijk, C., Abbott, L. F. & Ermentrout, G. B. When inhibition not excitation synchronizes neural firing. Journal of Computational Neuroscience 1, 313–321 (1994).
Whittington, M. A., Traub, R. D., Kopell, N., Ermentrout, B. & Buhl, E. H. Inhibition-based rhythms: experimental and mathematical observations on network dynamics. International Journal of Psychophysiology 38, 315–336 (2000).
Tiesinga, P. & Sejnowski, T. J. Cortical enlightenment: are attentional gamma oscillations driven by ING or PING? Neuron 63, 727–732 (2009).
Börgers, C. & Kopell, N. Synchronization in networks of excitatory and inhibitory neurons with sparse random connectivity. Neural Computation 15, 509–538 (2003).
White, J. A., Chow, C. C., Ritt, J., Soto-Treviño, C. & Kopell, N. Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons. J. Comput. Neurosci. 5, 5–16 (1998).
Destexhe, A., Contreras, D., Sejnowski, T. J. & Steriade, M. A model of spindle rhythmicity in the isolated thalamic reticular nucleus. Journal of Neurophysiology 72, 803–818 (1994).
Elson, R. C., Selverston, A. I., Abarbanel, H. D. I. & Rabinovich, M. I. Inhibitory synchronization of bursting in biological neurons: Dependence on synaptic time constant. J. Neurophysiol. 88, 1166–1176 (2001).
Kopell, N. & Ermentrout, B. Chemical and electrical synapses perform complementary roles in the synchronization of interneuronal networks. PNAS 101, 15482–15487 (2004).
Hjorth, J., Blackwell, K. T. & Hellgren Kotaleski, J. Gap junctions between striatal fast-spiking interneurons regulate spiking activity and synchronization as a function of cortical activity. Journal of Neuroscience 29, 5276–5286 (2009).
Traub, R. D. et al. Gap junctions between interneuron dendrites can enhance synchrony of gamma oscillations in distributed networks. Journal of Neuroscience 21, 9478–9486 (2001).
Lewis, T. J. & Rinzel, J. Dynamics of spiking neurons connected by both inhibitory and electrical coupling. Journal of Computational Neuroscience 14, 283–309 (2003).
Gibson, J. R., Beierlein, M. & Connors, B. W. Functional properties of electrical synapses between inhibitory interneurons of neocortical layer 4. Journal of Neurophysiology 93, 467–480 (2005).
Pfeuty, B., Mato, G., Golomb, D. & Hansel, D. Electrical synapses and synchrony: the role of intrinsic currents. Journal of Neuroscience 23, 6280–6294 (2003).
Rabinovich, M. et al. Dynamical encoding by networks of competing neuron groups: Winnerless competition. Physical Review Letters 87, 068102 (2001).
Korn, H. & Faure, P. Is there chaos in the brain? ii. experimental evidence and related models. Comptes Rendus Biologies 326, 787–840 (2003).
Wojcik, J., Schwabedal, J., Clewley, R. & Shilnikov, A. L. Key bifurcations of bursting polyrhythms in 3-cell central pattern generators. PLoS ONE 9, e92918 (2014).
Shilnikov, A., Calabrese, R. L. & Cymbalyuk, G. Mechanism of bistability: Tonic spiking and bursting in a neuron model. Physical Review E 71, 056214 (2005).
Canavier, C. C., Baxter, D. A., Clark, J. W. & Byrne, J. H. Control of multistability in ring circuits of oscillators. Biological Cybernetics 80, 87–102 (1999).
Zhao, L. & Nogaret, A. Experimental observation of multistability and dynamic attractors in silicon central pattern generators. Physical Review E 92, 052910 (2015).
Mahowald, M. & Douglas, R. A silicon neuron. Nature 354, 515–518 (1991).
Bartolozzi, C. & Indiveri, G. Synaptic dynamics in analog VLSI. Neural computation 19, 2581–2603 (2007).
O'Callaghan, E. L. et al. Utility of a novel biofeedback device for within-breath modulation of heart rate in rats: A quantitative comparison of vagus nerve vs. right atrial pacing. Frontiers in Physiology 7 (2016).
Nogaret, A. et al. Silicon central pattern generators for cardiac diseases. Journal of Physiology 593, 763–774 (2015).
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology 117, 500 (1952).
Chow, R. H., Klingauf, J. & Neher, E. Time course of Ca2+ concentration triggering exocytosis in neuroendocrine cells. Proc. Nat. Acad. Sci. 91, 12765–12769 (1994).
Rainnie, D. G., Mania, I., Mascagni, F. & McDonald, A. J. Physiological and morphological characterization of parvalbumin-containing interneurons of the rat basolateral amygdala. The Journal of Comparative Neurology 498, 142–161 (2006).
Ma, Y., Hu, H., Berrebi, A. S., Mathers, P. H. & Agmon, A. Distinct subtypes of somatostatin-containing neocortical interneurons revealed in transgenic mice. The Journal of Neuroscience 26, 5069–5082 (2006).
Rodrigues, S. et al. Time-coded neurotransmitter release at excitatory and inhibitory synapses. Proceedings of the National Academy of Sciences 113, E1108–E1115 (2016).
Nogaret, A. & King, A. Inhibition delay increases neural network capacity through stirling transform. Physical Review E 97, 030301 (2018).
Amit, D. J., Gutfreund, H. & Sompolinsky, H. Storing infinite numbers of patterns in a spin-glass model of neural networks. Physical Review Letters 55, 1530–1533 (1985).
Galarreta, M. & Hestrin, S. Spike transmission and synchrony detection of GABAergic interneurons. Science 292, 2295 (2001).
Gibson, J. R., Beierlein, M. & Connors, B. W. Two networks of electrically coupled inhibitory neurons in neocortex. Nature 402, 75–79 (1999).
Amitai, Y. et al. The spatial dimensions of electrically coupled networks of interneurons in the neocortex. Journal of Neuroscience 22, 4142–4152 (2002).
Chow, C. C. & Kopell, N. Dynamics of spiking neurons with electrical coupling. Neural Computation 12, 1643–1648 (2000).
Le Van Quyen, M. et al. High-frequency oscillations in human and monkey neocortex during the wake-sleep cycle. PNAS 113, 9363–9368 (2016).
Wigner, E. On the interaction of electrons in metals. Physical Review 46, 1002–1011 (1934).
Abrikosov, A. A. On the magnetic properties of superconductors of the second group. Soviet Physics JETP 5, 1174–1182 (1957).
Huntsman, M. M., Porcello, D. M., Homanics, G. E., DeLorey, T. M. & Huguenard, J. R. Reciprocal inhibitory connections and network synchrony in the mammalian thalamus. Science 283, 541–543 (1999).
We thank H. Adesnik and J.F.R. Paton for valuable discussions. This work was supported by the European Union's Horizon 2020 Future Emerging Technologies Programme (Grant No. 732170) and the British Heart Foundation under grant NH/14/1/30761. JDT acknowledges the support of EPSRC for a DTP studentship.
Department of Physics, University of Bath, Bath, BA2 7AY, UK
Ashok S. Chauhan, Joseph D. Taylor & Alain Nogaret
A.N. and A.S.C. conceived the experiments. A.S.C. and J.D.T. conducted the experiments and analysed the data. A.N. conceived the theory and wrote the manuscript. All authors discussed the results and reviewed the manuscript.
Correspondence to Alain Nogaret.
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Chauhan, A.S., Taylor, J.D. & Nogaret, A. Dual Mechanism for the Emergence of Synchronization in Inhibitory Neural Networks. Sci Rep 8, 11431 (2018). https://doi.org/10.1038/s41598-018-29822-8
Vol. 14, Issue 2, 2021. Published November 05, 2021 EDT.
Approximating the Aggregate Loss Distribution
Dmitry E. Papush, Aleksey S. Popelyukhin, Jasmine G. Zhang
Keywords: aggregate loss, collective risk model, compound distribution, simulation, gamma distribution, aggregate models
Papush, Dmitry E., Aleksey S. Popelyukhin, and Jasmine G. Zhang. 2021. "Approximating the Aggregate Loss Distribution." Variance 14 (2).
Aggregate loss distributions have extensive applications in actuarial practice. Several approaches have been suggested to estimate the aggregate loss distribution, including the Heckman-Meyers method, the Panjer algorithm, and fast Fourier transformation, to name a few. All of these methods rely on separate assumptions about frequency and severity components of the aggregate losses. Quite often, however, obtaining frequency and severity expectations independently is not practical, and only aggregate information is available for analysis. In that case, the a priori assumption about the shape of the aggregate loss distribution becomes critical, especially for assessing the probability of very high aggregate loss values, in the tail.
In this work we seek to determine which statistical two-parameter distribution, out of several, serves best to approximate aggregate loss distributions for property and casualty products. We focus on ground-up losses limited by a per occurrence limit. These results are relevant for quota share agreements. In addition, we consider layer losses, the results of which are important for umbrella quota share transactions.
We simulate samples of aggregate loss, fit statistical distributions to the samples, and then use goodness-of-fit tests to determine the best-fitting distribution. In all realistic scenarios with limited losses, we find that the gamma distribution uniformly provides the most reasonable approximation to the aggregate loss.
Aggregate loss distributions have extensive applications in actuarial practice. The modeling of aggregate losses is a fundamental aspect of actuarial work, as it bears on business decisions regarding many aspects of insurance and reinsurance contracts. Our purpose in this study is to determine the best statistical distribution with which to approximate the aggregate loss distribution for property and casualty business with applications to quota share agreements.
When separate data on loss frequency and loss severity distributions is available, actuaries can approximate the aggregate loss distribution using such methods as the Heckman-Meyers method (Heckman and Meyers 1983), the Panjer method (Panjer 1981), fast Fourier transform (Robertson 1992), and stochastic simulations (Mohamed, Razali, and Ismail 2010). However, sometimes only aggregate information is available for analysis. In such a case, the choice of the shape of the aggregate loss distribution becomes very important, especially in the "tail" of the distribution. The tail is often the part of the distribution that is most affected by policy limits, and a failure to model the tail correctly can lead to an overestimation of the discount given to loss ratios due to the application of policy limits.
Previously published papers debate the appropriateness of various aggregate loss distributions. Dropkin (1964) and Bickerstaff (1972) showed that the lognormal distribution closely approximates certain types of homogenous loss data. Pentikäinen (1987) suggested an improvement of the normal approximation using the so-called NP method. He compared this method with the gamma approximation and concluded that both methods yield reasonable approximations when the skewness of aggregate losses is less than 1, but that neither method is accurate when the skewness is greater than 1. Venter (1983) suggested the transformed gamma and transformed beta distributions for the approximation of aggregate loss, while Chaubey, Garrido, and Trudeau (1998) suggested the inverse Gaussian distribution.
Papush, Patrik, and Podgaits (2001) analyzed several simulated samples of aggregate losses and compared the fit of the normal, lognormal, and gamma distributions to the simulated data in the tails of the distributions. This was a deviation from previous research, which was based solely on theoretical considerations. In all seven scenarios Papush, Patrik, and Podgaits tested, the gamma distribution performed the best. Therefore, they recommended the gamma as the most appropriate approximation of aggregate loss.
This research expands upon that 2001 study but retains the same general approach.
2. Method
2.1. Overview of the study
Initially, proceeding as in Papush, Patrik, and Podgaits (2001), we limit our consideration to two-parameter probability distributions. Three-parameter distributions often provide a better fit, but observed data is often too sparse to reliably estimate a third parameter. We compare the fit of five candidate distributions, shown in Table 1.
Table 1. Distributions used for the approximation of aggregate loss (parameters; probability density function; mean; variance)

Normal: \(\mu\) (location), \(\sigma > 0\) (scale); pdf \(\frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}\); mean \(\mu\); variance \(\sigma^2\)

Logistic: \(\mu\) (location), \(s > 0\) (scale); pdf \(\frac{e^{-\frac{x-\mu}{s}}}{s \left( 1 + e^{-\frac{x-\mu}{s}} \right)^2} = \frac{1}{4s}\,\text{sech}^2 \left(\frac{x - \mu}{2s} \right)\); mean \(\mu\); variance \(\frac{s^{2}\pi^{2}}{3}\)

Gamma: \(\alpha > 0\) (shape), \(\beta > 0\) (rate); pdf \(\frac{\beta^{\alpha} x^{\alpha - 1} e^{-\beta x}}{\Gamma (\alpha)}\); mean \(\frac{\alpha}{\beta}\); variance \(\frac{\alpha}{\beta^{2}}\)

Inverse Gaussian: \(\mu > 0\) (location), \(\lambda > 0\) (shape); pdf \(\left\lbrack \frac{\lambda}{2\pi x^3} \right\rbrack^{1/2} \exp \left\{ \frac{-\lambda (x - \mu)^2}{2\mu^2 x} \right\}\); mean \(\mu\); variance \(\frac{\mu^{3}}{\lambda}\)

Lognormal: \(\mu\) (scale), \(\sigma > 0\) (shape); pdf \(\frac{1}{x \sigma \sqrt{2\pi}} \exp \left( - \frac{(\ln x - \mu)^2}{2\sigma^2} \right)\); mean \(e^{\mu + \frac{\sigma^{2}}{2}}\); variance \(e^{(2\mu + \sigma^2)}(e^{\sigma^2} - 1)\)
Our analytic procedure is summarized in the following formal steps:
1. Choose a frequency distribution and obtain severity distributions from Insurance Services Office's (ISO's) circulars or loss submission data.

2. Simulate the number of claims \(N\) and the individual loss amounts \((X_{1}, \ldots, X_{N}),\) put the individual loss amounts into per occurrence layers \((X_{1}^{l}, \ldots, X_{N}^{l}),\) and calculate the corresponding aggregate loss \(S^{l} = \sum_{i = 1}^{N}X_{i}^{l}\) in each layer \(l.\)

3. Repeat this analysis many times (50,000) to obtain a sample of aggregate losses.

4. Fit the parameters of the different candidate probability distributions.

5. Test the goodness of fit of the distributions and compare results.
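The simulation and layering steps above can be sketched as follows (a plain Monte Carlo stand-in in Python for illustration; the study itself used R with Latin hypercube sampling, and the two-component mixture here is illustrative, not an ISO curve):

```python
import random

def simulate_aggregate(n_years, lam, weights, means, retention, limit, seed=1):
    """Simulate aggregate layer losses under a compound Poisson model.

    Frequency: Poisson(lam) claim counts, generated by counting
    exponential inter-arrival times within one period. Severity: a
    mixed exponential with component `weights` and `means`. Each claim
    is put through the per occurrence layer before summing.
    """
    rng = random.Random(seed)

    def poisson(lam):
        n, t = 0, rng.expovariate(lam)
        while t < 1.0:
            n += 1
            t += rng.expovariate(lam)
        return n

    sample = []
    for _ in range(n_years):
        agg = 0.0
        for _ in range(poisson(lam)):
            theta = rng.choices(means, weights=weights)[0]
            x = rng.expovariate(1.0 / theta)            # one ground-up loss
            agg += min(max(x - retention, 0.0), limit)  # per occurrence layer
        sample.append(agg)
    return sample

# Illustrative 2-component mixture with mean 0.8*10K + 0.2*135K = 35K,
# layered $1M x $0, mean frequency 100:
losses = simulate_aggregate(1000, lam=100, weights=[0.8, 0.2],
                            means=[10_000, 135_000], retention=0,
                            limit=1_000_000)
```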
2.2. Choice of software
We used the open-source statistical software tool R (R Core Team 2014) to perform our analysis. R is widely used in different actuarial contexts and is also popular in the sciences and social sciences.
In addition to the base version, R allows users to develop packages of functions and compiled code and upload them to the Comprehensive R Archive Network (CRAN). These packages can be freely downloaded for use. In our project, we used packages lhs (Carnell 2016), fitdistrplus (Delignette-Muller and Dutang 2015), NORMT3 (Nason 2012), gsl (Hankin 2006), actuar (Dutang, Goulet, and Pigeon 2008), and e1071 (Meyer et al. 2017).
2.3. Selection of frequency distribution
In actuarial science, the Poisson distribution is commonly used to represent the frequency of insurance claims. The Poisson process has independent increments: the number of claims in any time interval does not affect the number of claims in any other, disjoint interval. This is a good approximation to what we observe in real data on claim frequency. We selected different mean frequencies (λ's) to model small, large, and in some cases medium books of business. The selected mean frequencies can be seen in Tables 2a and 2b.
2.4. Selection of severity distributions
For casualty products, we used curves developed by ISO actuaries. The majority of ISO's curves are mixed exponential distributions, which can be represented as weighted sums of exponential densities \(f(x) = \sum_{i}{w_{i}\frac{1}{\theta_{i}}e^{-x/\theta_{i}}}.\) Each exponential distribution in the mixture has a different mean \(\theta_{i}\) and a weight \(w_{i},\) with the weight corresponding to the probability of that exponential component being chosen.
Table 2a shows the severity distributions we considered for casualty products, along with the means of the claim counts used, and the per occurrence layers we divided the individual losses into. Additional detail may be found in Section 2.6, "Layer descriptions."
To model typical small commercial, middle market, and large commercial books of non-catastrophe property business, we chose distributions derived from ISO data for increasing amount-of-insurance (AOI) ranges. In addition, we simulated distributions based on representative samples of real-life losses.
Table 2a. Distributions used for casualty products (line of business; mean Poisson frequency λ; severity distribution; per occurrence layers)

General liability, premises and operations: λ = 100, 500, 1000; mixed exponential (mean 35K); layers 250K x 0, 500K x 0, 1M x 0, 750K x 250K, 500K x 500K, 4M x 1M

General liability, products: same frequencies as above; mixed exponential (mean 135K); same layers as above

Commercial auto: same frequencies as above; mixed exponential (mean 45K); same layers as above

Errors and omissions, medium lawyers: λ = 50, 500; lognormal (mean 250K); layers 1M x 0, 5M x 0

Directors and officers, public non–Fortune 500: λ = 50, 500; lognormal (mean 1.3M); layer 10M x 25M
Table 2b. Distributions used for property, non-catastrophe (book of business; mean Poisson frequency λ; severity distribution; per occurrence layers)

Small commercial, AOI 5M to 6M: λ = 100, 500; mixed exponential (mean 95K); layers 1M x 0, unlimited

Middle market, AOI 25M to 30M: same frequencies as above; mixed exponential (mean 175K); same layers as above

Large commercial, AOI 100M to 125M: same frequencies as above; mixed exponential (mean 285K); same layers as above

Non-catastrophe property 1: same frequencies as above; loss sample (mean 100K); same layers as above
2.5. Simulation method
We used Latin hypercube sampling (LHS) to sample frequencies from the Poisson distribution and severities from each exponential component of the mixed exponential distributions before applying corresponding weights. We chose LHS over Monte Carlo simulation because it spreads sample points more evenly across all possible values so that samples drawn using LHS are more representative of the real variability in frequency or severity. In particular, since we were interested in studying the tail of the distribution for this study, LHS ensured that our simulation contained a reasonable sampling of high values. We implemented LHS using the randomLHS() function in the lhs package in R.
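The core idea of LHS, one draw per equal-probability stratum taken in random order, can be illustrated without the R package (a Python sketch of the generic technique, not of the randomLHS() internals):

```python
import math
import random

def latin_hypercube_uniforms(n, seed=0):
    """n stratified U(0,1) draws: exactly one from each of n equal
    strata [i/n, (i+1)/n), returned shuffled to avoid ordering bias."""
    rng = random.Random(seed)
    u = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(u)
    return u

def exponential_from_uniforms(u, theta):
    """Invert the exponential CDF so the strata map to severity draws."""
    return [-theta * math.log(1.0 - ui) for ui in u]

u = latin_hypercube_uniforms(10_000)
draws = exponential_from_uniforms(u, theta=35_000)
# Stratification guarantees one draw per percentile band, including the
# far tail, so the sample mean sits very close to theta.
```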
We used bootstrapping, the base function sample() in R, to choose which exponential component of the mixed exponential distribution to use and to bootstrap losses from the property loss submissions. This function allows the bootstrapping procedure to run very efficiently.
We sampled the losses from the loss submissions without replacement because we believe this provides a better representation of a true "year" of data: the same loss to the same property should not occur in the same year. Sampling without replacement also ensures that we do not obtain too many small losses in each year's sample, and it thus prevents the understatement of aggregate loss.
2.6. Layer descriptions
We divided our simulated individual losses into the per occurrence layers included in Tables 2a and 2b. Using the following calculation we determined the amount of penetration of each simulated loss within a layer:
\[Loss\ in\ Layer = Min(Max(Loss - RETENTION,\ 0),\ LIMIT),\]
where \({RETENTION}\) is the lower bound of a layer, possibly equal to zero, and \({LIMIT}\) is the width of the layer. For instance, for the layer $750K excess of $250K, \({RETENTION}\) would be $250,000 and \({LIMIT}\) would be $750,000. For the layer $1M excess of $0, \({RETENTION}\) would be $0 and \({LIMIT}\) would be $1M.
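As a one-line function (Python for illustration), with the same RETENTION/LIMIT convention:

```python
def loss_in_layer(loss, retention, limit):
    """Penetration of a single loss into the layer `limit` x `retention`:
    min(max(loss - retention, 0), limit)."""
    return min(max(loss - retention, 0.0), limit)

# For the $750K x $250K layer: a $100K loss contributes nothing,
# a $600K loss contributes $350K, and a $2M loss is capped at the
# $750K layer width.
```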
If none of the claims in one simulation penetrated one of the excess layers—i.e., "$4M x $1M" —the aggregate loss for that simulation was zero. This created a mass at zero in our distribution. Thus, the aggregate loss distributions we fitted were of this form:
\[p_{0} + p_{1}*Candidate\ Distribution,\]
where \(p_{0} = \Pr\left\{ Aggregate\ Loss = 0 \right\},\) \(p_{1} \equiv 1 - p_{0}\) and \(Candidate\ Distribution \in \{ Normal,\) \(Logistic,\) \(Gamma,\) \(Inverse\ Gauss,\) \(Lognormal\}\).
2.7. Estimating the number of simulations to run
Many studies (e.g., Papush 1997) use the central limit theorem to estimate the number of simulations necessary to achieve a reasonable degree of accuracy in the estimate of the mean. However, this study concentrates on approximating the tail of the distribution. To the best of our knowledge, there is unfortunately no established method for finding the number of simulations to run when the values of interest relate to the tail of a distribution. To ensure that we achieved a reasonable degree of accuracy of our results, we followed the method described as follows.
We started by running 1,000,000 simulations and calculating the 99th percentile of the resulting aggregate distribution. We then re-ran the same 1,000,000 simulations, this time monitoring two metrics after each increment of 2,500 simulations: the change in the 99th percentile of the cumulative distribution relative to the previous increment, and the difference between the 99th percentile of the cumulative distribution and the 99th percentile obtained from our first run.
Using this approach, we were able to verify the following two assumptions. First, as the mean frequency of the Poisson distribution used in our simulation increases—that is, the size of our book—the accuracy of the 99th percentile will also increase. Second, as the per occurrence limit of the coverage decreases, the accuracy of the 99th percentile will increase.
Because of these two observations, we show only results for small-book scenarios with high per occurrence limits, the most demanding cases. With 50,000 simulations, both metrics suggested an error of 1% or less for nearly all scenarios. For instance, for premises and operations with a $1M per occurrence limit and a mean frequency of 100, the error was 0.8% after 25,000 simulations and 0.6% after 50,000 simulations. Figure 1 plots the two metrics for this scenario against the number of simulations used. The error exceeded 3% in only one scenario.
Figure 1. 99th percentile simulation error
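The monitoring procedure described above can be sketched as follows (Python for illustration; the nearest-rank percentile helper and the exponential loss generator are our stand-ins for the actual aggregate model):

```python
import random

def percentile(sorted_vals, q):
    """Simple empirical percentile (nearest-rank) on pre-sorted data."""
    idx = min(int(q * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

def monitor_p99(draw, n_total, step=2500, seed=7):
    """Record the running 99th percentile after every `step` simulations,
    so successive changes can be tracked as the sample grows."""
    rng = random.Random(seed)
    sample, history = [], []
    for i in range(1, n_total + 1):
        sample.append(draw(rng))
        if i % step == 0:
            history.append(percentile(sorted(sample), 0.99))
    return history

# Illustrative: standard exponential draws, whose true p99 is ln(100) ~ 4.6
history = monitor_p99(lambda rng: rng.expovariate(1.0), 50_000)
```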
2.8. Parameter estimation
Parameter estimation was implemented using the function fitdist() in the R package fitdistrplus (Delignette-Muller and Dutang 2015). Initially, we used both maximum likelihood and the method of moments to estimate parameters for approximating distributions. The parameter estimates for the two methods were similar to one another, but the parameters from the method of moments yielded a better-fitting distribution as measured by both the percentile matching and the expected excess value tests. Therefore, we chose to use the method of moments.
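For the gamma distribution the method-of-moments estimates have a closed form: matching the mean \(\alpha/\beta\) and variance \(\alpha/\beta^{2}\) gives \(\alpha = mean^{2}/variance\) and \(\beta = mean/variance.\) A minimal Python sketch (standing in for fitdist() with the method of moments):

```python
from statistics import mean, pvariance

def gamma_mme(sample):
    """Method-of-moments estimates (shape alpha, rate beta) for a gamma fit:
    alpha = m^2 / v, beta = m / v, with m and v the sample mean and
    (population) variance."""
    m, v = mean(sample), pvariance(sample)
    return m * m / v, m / v

# Tiny illustrative sample: mean 4, population variance 5
alpha_hat, beta_hat = gamma_mme([1.0, 3.0, 5.0, 7.0])
```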
2.9. Testing goodness of fit
Once we had simulated the sample of aggregate losses and estimated the parameters for the distributions, we tested the goodness of fit. We created three tests to compare distributions in their tails. These tests are also relevant to reinsurance pricing.
The aggregate features of proportional reinsurance treaties are usually expressed in terms of loss ratios, which we proxy here by percentages of the mean of the simulated aggregate distribution (e.g., 100% of mean, 125% of mean, etc.). In other words, we express values \(x\) in the tail of the distributions as percentages \(p\) of the mean of the simulated distribution: \(x = p \cdot mean.\)
The percentile matching test compares the survival functions \(Prob\{ X > x\}\) of distributions at various values of the argument until the distributions effectively vanish. This test gives a transparent indication of where two distributions are different and by how much. For various percentages of the mean, we tested how much the survival functions \(Prob\{ X > x\}\) in the fitted distributions differed from those of the simulated sample.
The excess expected loss cost test compares the conditional means of distributions in excess of different amounts. Specifically, it evaluates the conditional expectations \(E\left\lbrack X - x \middle| X > x \right\rbrack*Prob\{ X > x\}\) for different values of \(x.\) These values are important for both the ceding company and the reinsurance carrier when considering aggregate loss ratio caps, stop-loss coverage, annual aggregate deductible coverage, profit commission, sliding-scale commission, and other types of aggregate reinsurance transactions with loss-adjustable features. For instance, in the case of an aggregate loss ratio cap, it is important to accurately price the discount given to the cedent based on the cap. For various percentages of the mean, we test the percentage error in the excess expected loss cost in the fitted distributions as compared to the excess expected loss cost in the simulated sample. We call this "Error in Pricing of Aggregate Stop Loss" in the tables and charts presented in the appendix.
Finally, we estimated the accuracy of the five candidate distributions in pricing loss corridors. In other words, we test the amounts \(E\left\lbrack \hat{X} \middle| x_{2} > X > x_{1} \right\rbrack\) \(*\) \(Prob\left\{ x_{2} > X > x_{1} \right\},\) where \(\hat{X}\) is the loss in an aggregate layer, namely \(\min ( \max ( X - x_{1},0 ),\) \(x_{2} - x_{1} ).\) In a loss corridor, the reinsurer returns the responsibility for losses between the two loss ratios \(x_{1}\) and \(x_{2}\) to the primary insurer.
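All three tail tests reduce to simple empirical functionals of the simulated sample; a Python sketch (function names are ours, evaluation points expressed as multiples of the mean as in the text):

```python
def survival(sample, x):
    """Percentile matching: empirical Prob{X > x}."""
    return sum(1 for s in sample if s > x) / len(sample)

def excess_expected_loss(sample, x):
    """Excess expected loss cost: E[X - x | X > x] * Prob{X > x},
    i.e. the empirical mean of (X - x)+."""
    return sum(max(s - x, 0.0) for s in sample) / len(sample)

def corridor_loss(sample, x1, x2):
    """Expected loss in the aggregate corridor (x1, x2):
    the empirical mean of min(max(X - x1, 0), x2 - x1)."""
    return sum(min(max(s - x1, 0.0), x2 - x1) for s in sample) / len(sample)

sample = [50.0, 100.0, 150.0, 300.0]   # illustrative aggregate losses
m = sum(sample) / len(sample)          # mean = 150
# e.g. survival(sample, 1.25 * m) probes the 125%-of-mean tail point
```

The fitted candidate distributions are scored by how far their analytic versions of these quantities fall from the empirical ones.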
3. Results and conclusion
We illustrate the results of our study in the tables in the appendix. There we show the characteristics of the frequency—denoted as "Size"—and name—denoted as "LoB" (i.e., line of business)—of severity distributions selected in each scenario, the mean aggregate loss in each layer, and the results of the three goodness-of-fit tests. The first column of each table shows the results of each test on the simulated data. In the top portion of each table, the other five columns show the difference in the percentile matching test for the various candidate distributions. The second and third portions in our tables show the error as a percentage of mean aggregate loss for the different distributions in the excess expected loss cost test and error in loss corridor pricing, respectively. The graphs show the results from the first two goodness-of-fit tests. One can clearly see that in every example, gamma (represented by a red line on charts and green background in the tables) shows the smallest error.
Of course, the six examples shown in the appendix, with measurements at just six points each, are not by themselves a convincing argument. That is why we ran our tests over a large sample of reasonable severity distributions (253), layers (13), and portfolio sizes (3). Among the severity distributions selected for the study were several closed-form ones (ISO's PremOps Table 1, for example) as well as some empirical ones (losses from a large client's submission); in the latter case we used bootstrapping to generate the aggregate loss distribution. We measured results at every 5 percentage points, from a minimum of 75% of mean to a maximum of 250% of mean.
We found that the gamma distribution provides a fit that is almost always the best for both ground-up and excess layers.
4. Additional comments
Our choices of potential candidates for the aggregate distribution (see Table 1) were not, in fact, random. We chose the normal distribution as a limiting distribution of sums of identically distributed independent losses. We considered the gamma distribution as a distribution of sums of identical exponentials.[1] We included the lognormal distribution as it is a popular distribution used to approximate the sum of unlimited losses. As we analyzed our choices through the differences in higher moments of the aggregate loss distribution[2] (since the first two moments were matched), we decided to consider also the logistic distribution, whose skewness and excess kurtosis lie between that of the normal and the gamma distributions, as well as the inverse Gaussian distribution, whose skewness and excess kurtosis lie between that of the gamma and the lognormal distributions.
Table 3 demonstrates the behavior of higher moments of the listed distributions expressed in terms of their coefficient of variation, or CV. The table is important as it provides a comparison of the shape of the different theoretical curves with the same first two moments.
Table 3. Higher moments (skewness and excess kurtosis) in terms of CV

Distribution | CV | Skewness | Ex. kurtosis
Normal | \(c\) | 0 | 0
Logistic | \(c\) | 0 | 1.2
Gamma | \(c\) | \(2c\) | \(6c^2\)
Inverse Gauss | \(c\) | \(3c\) | \(15c^2\)
Lognormal | \(c\) | \(3c + c^3\) | \(16c^2 + 15c^4 + 6c^6 + c^8\)
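The gamma row of Table 3 can be checked with a quick simulation. The sketch below uses an arbitrary illustrative shape parameter (k = 4, not a value from the paper); for a gamma with shape k, CV = 1/√k, skewness = 2·CV and excess kurtosis = 6·CV².

```python
import numpy as np

rng = np.random.default_rng(7)

# Check the gamma row of Table 3 by simulation: for Gamma(shape k),
# CV = 1/sqrt(k), skewness = 2*CV, excess kurtosis = 6*CV^2.
k = 4.0  # illustrative shape parameter, not from the paper
x = rng.gamma(k, 1.0, 1_000_000)

cv = x.std() / x.mean()
z = (x - x.mean()) / x.std()
skew = (z ** 3).mean()
ex_kurt = (z ** 4).mean() - 3.0

print(cv, 1 / np.sqrt(k))    # both ~0.5
print(skew, 2 * cv)          # both ~1.0
print(ex_kurt, 6 * cv ** 2)  # both ~1.5
```

The same kind of check reproduces the normal and inverse Gaussian rows; only the sampling call changes.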
Given that we narrowed our consideration to two-parameter distributions, we could only match the first two moments of the empirical distribution by varying parameters of the theoretical one. Consequently, the quality of the approximation depends mainly on how close the higher moments of the theoretical distribution are to the corresponding higher moments of the empirical distribution. Therefore, to decide which theoretical distribution is a better approximation, it is helpful to estimate the ratio of skewness, and, possibly, kurtosis, to the CV.
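The two-moment matching step can be sketched in a few lines of Python. The snippet below is a simplified stand-in for the paper's simulation setup (an illustrative Poisson frequency with a limited lognormal severity, not one of the 253 studied severity distributions): for a gamma with shape k and scale θ, mean = kθ and variance = kθ², so matching gives k = mean²/var and θ = var/mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an aggregate loss distribution: Poisson claim counts with a limited
# lognormal severity. All parameter values are illustrative, not the paper's.
lam, mu, sigma, limit = 50.0, 9.0, 1.0, 200_000.0
n_sims = 20_000
counts = rng.poisson(lam, n_sims)
agg = np.array([np.minimum(rng.lognormal(mu, sigma, n), limit).sum()
                for n in counts])

# Two-parameter moment matching: Gamma(shape k, scale theta) has mean k*theta
# and variance k*theta^2, so k = mean^2/var and theta = var/mean.
m, v = agg.mean(), agg.var()
k, theta = m ** 2 / v, v / m

# Compare quantiles of the fitted gamma against the simulated distribution.
fitted = rng.gamma(k, theta, n_sims)
for q in (0.75, 0.90, 0.99):
    print(f"q={q}: empirical {np.quantile(agg, q) / m:.3f}, "
          f"gamma {np.quantile(fitted, q) / m:.3f}")
```

Matching k and θ reproduces the first two moments exactly; the quality of the tail fit then depends on how close the higher moments are, as discussed above.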
In the case of Poisson frequency and known severity assumptions it is possible to calculate these ratios exactly.[3] For example, Figure 2 illustrates the behavior of the skewness-to-CV ratio using a GL products severity curve along with a Poisson frequency distribution for different policy limits. The results of the empirical simulation are labeled as CV_agg, and the other three lines are derived from theoretical calculations. As one can see, the ratio of skewness to CV was much closer to 2 than to 3 in the simulation. We could therefore expect the gamma distribution to fit better than the other candidates, because its skewness-to-CV ratio is exactly 2 (see Table 3).
Figure 2.Skewness/CV as a function of primary limit
We ran a series of similar tests for a multitude of combinations of appropriate severity curves, layers, and portfolio sizes: 253 x 13 x 3 scenarios overall. In deciding which theoretical distribution provides the best approximation to the simulated empirical distribution, we paid special attention to matching the tails of the distributions.
We were encouraged to see that closeness of higher moments translated to a good fit and that the overwhelming majority of lines of business can be, for all practical purposes, well approximated by a gamma distribution. The same would hold true not only for a series of individual severity curves but also for a mix of several of them. These results lead us to the general conclusion that the gamma distribution provides a uniformly reasonable approximation to the aggregate loss on the interval from the mean to at least two means of the aggregate distribution.
Submitted: March 29, 2018 EDT
Accepted: March 12, 2019 EDT
Bickerstaff, D.R. 1972. "Automobile Collision Deductibles and Repair Cost Groups: The Lognormal Model." Proceedings of the Casualty Actuarial Society 59: 68–102.
Carnell, R. 2016. lhs: Latin Hypercube Samples. R package version 0.14. https://CRAN.R-project.org/package=lhs.
Chaubey, Yogendra P., José Garrido, and Sonia Trudeau. 1998. "On the Computation of Aggregate Claims Distributions: Some New Approximations." Insurance: Mathematics and Economics 23 (3): 215–30. https://doi.org/10.1016/s0167-6687(98)00029-8.
Delignette-Muller, M.L., and C. Dutang. 2015. "fitdistrplus: An R Package for Fitting Distributions." Journal of Statistical Software 64 (4): 1–34. https://www.jstatsoft.org/v64/i04/.
Dropkin, L.B. 1964. "Size of Loss Distributions in Workmen's Compensation Insurance." Proceedings of the Casualty Actuarial Society 51: 198–223.
Dutang, C., V. Goulet, and M. Pigeon. 2008. "actuar: An R Package for Actuarial Science." Journal of Statistical Software 25 (7): 1–37. www.researchgate.net/publication/26539015_actuar_An_R_Package_for_Actuarial_Science.
Hankin, R.K. 2006. "Special Functions in R: Introducing the gsl Package." R News 6 (4): 24–26.
Heckman, P.E., and G.G. Meyers. 1983. "The Calculation of Aggregate Loss Distributions from Claim Severity and Claim Count Distributions." Proceedings of the Casualty Actuarial Society 70 (133 & 134): 22–61.
Meyer, D., E. Dimitriadou, K. Hornik, A. Weingessel, and F. Leisch. 2017. e1071: Misc Functions of the Department of Statistics, Probability Theory Group. R package version 1.6-8. https://CRAN.R-project.org/package=e1071.
Mohamed, M.A., A.M. Razali, and N. Ismail. 2010. "Approximation of Aggregate Losses Using Simulation." Journal of Mathematics and Statistics 6 (3): 233–39. https://doi.org/10.3844/jmssp.2010.233.239.
Nason, G. 2012. NORMT3: Evaluates Complex erf, erfc, Faddeeva, and Density of Sum of Gaussian and Student's t. R package version 1.0-3. https://CRAN.R-project.org/package=NORMT3.
Panjer, Harry H. 1981. "Recursive Evaluation of a Family of Compound Distributions." ASTIN Bulletin 12 (1): 22–26. https://doi.org/10.1017/s0515036100006796.
Papush, D.E. 1997. "A Simulation Approach in Excess Reinsurance Pricing." Insurance: Mathematics and Economics 20 (3): 266.
Papush, D.E., G.S. Patrik, and F. Podgaits. 2001. "Approximations of the Aggregate Loss Distribution." Casualty Actuarial Society Forum, Winter, 175–86.
Pentikäinen, T. 1987. "Approximative Evaluation of the Distribution Function of Aggregate Claims." ASTIN Bulletin 17 (1): 15–39. https://doi.org/10.2143/ast.17.1.2014982.
R Core Team. 2014. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Robertson, J. 1992. "The Computation of Aggregate Loss Distributions." Proceedings of the Casualty Actuarial Society 79 (150): 57–133.
Venter, G. 1983. "Transformed Beta and Gamma Distributions and Aggregate Losses." Proceedings of the Casualty Actuarial Society 70 (133 & 134): 289–308.
X is aggregate loss
X^ is loss in an aggregate layer between attachment point x_1 and exhaustion point x_2, namely min(max(X - x_1, 0), x_2 - x_1)
Indeed, the sum of n identical exponentials with parameter \(\lambda\) has the characteristic function \({(1 - \frac{{it}}{\lambda})}^{- n}\), which is that of \(Gamma(n,\lambda).\)
According to Pentikäinen (1987), as long as the severity distribution is restricted to a limited interval, aggregate loss distributions with several matching moments approximate each other acceptably well.
Indeed, for a Poisson-distributed number of claims with mean \(\lambda\):
\[E\left\lbrack {Agg} \right\rbrack = \lambda E\left\lbrack {Sev} \right\rbrack,\ \ Var\left\lbrack {Agg} \right\rbrack = \lambda E\left\lbrack {Sev}^{2} \right\rbrack,\ \ CV\left\lbrack {Agg} \right\rbrack = \lambda^{\frac{- 1}{2}}{(E\left\lbrack {Sev}^{2} \right\rbrack)}^{\frac{1}{2}}/E\left\lbrack {Sev} \right\rbrack,\ \ Skew\left\lbrack {Agg} \right\rbrack = \lambda^{\frac{- 1}{2}}E\left\lbrack {Sev}^{3} \right\rbrack/{(E\left\lbrack {Sev}^{2} \right\rbrack)}^{\frac{3}{2}}\]
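These formulas are easy to verify numerically. The sketch below is a hedged illustration using an exponential severity (for which the raw moments are E[Sev^k] = k!·mean^k) and an arbitrary λ, rather than the paper's GL severity curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical check of the footnote formulas, using an exponential severity with
# mean 1000 and Poisson frequency lambda = 40 (illustrative values only).
lam, mean_sev = 40.0, 1000.0
# Exponential raw moments: E[Sev] = mean, E[Sev^2] = 2*mean^2, E[Sev^3] = 6*mean^3.
e1, e2, e3 = mean_sev, 2 * mean_sev ** 2, 6 * mean_sev ** 3

cv_theory = lam ** -0.5 * e2 ** 0.5 / e1    # CV[Agg]   ~ 0.2236
skew_theory = lam ** -0.5 * e3 / e2 ** 1.5  # Skew[Agg] ~ 0.3354

counts = rng.poisson(lam, 100_000)
agg = np.array([rng.exponential(mean_sev, n).sum() for n in counts])
cv_sim = agg.std() / agg.mean()
skew_sim = ((agg - agg.mean()) ** 3).mean() / agg.std() ** 3

print(cv_theory, cv_sim)
print(skew_theory, skew_sim)
```

Note that the ratio Skew[Agg]/CV[Agg] = E[Sev³]·E[Sev]/(E[Sev²])² does not depend on λ, which is why the skewness-to-CV ratio can be plotted as a function of the policy limit alone.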
Subgradients of non-convex functions
In these notes (section 2.3), it is stated that:
A point $x^*$ is a minimizer of a function $f$ (not necessarily convex) if and only if $f$ is subdifferentiable at $x^*$ and $0 \in\partial f(x^*).$
Could anybody provide me with references for a proof of the above statement?
Is there a reference where we can learn more about subgradients of non-convex functions? In Section 3 (Calculus of subgradients) of the above notes, many properties of subgradients are presented for convex functions. I would like to know which of these properties still hold for non-convex functions.
optimization convex-optimization nonconvex
Khue
The fact that $x^*$ is a (global!) minimizer of $f$ if and only if $0\in\partial f(x^*)$ is already fully explained in the notes you linked to -- it's really that simple, but here's the argument again for the sake of completeness. Assume that $x^*$ is a global minimizer of $f$. Then, by definition, $$ f(x) - f(x^*) \geq 0 = 0^T (x-x^*) \qquad\text{for any }x\in\mathbb{R}^n, $$ which is exactly the definition of the convex subdifferential (and subdifferentiability). For the other direction, you just swap the last equality around to see that one definition implies the other. (Note that this argument only works for global minimizers -- this is where the convexity of $f$ really comes in, because for convex functions, every minimizer is a global minimizer).
As to the other properties for non-convex functions: The short answer is none. Note that the reverse direction assumes that $f$ is subdifferentiable, i.e., that the set $\partial f(x^*)$ is non-empty. This doesn't sound like much (indeed, it's always true for any (locally finite) convex function), but it pretty much only holds for convex functions. In fact, you can show that if the subdifferential is non-empty everywhere, then $f$ is convex (see https://math.stackexchange.com/q/1499059). So all the other properties -- which are statements about some elements of the subdifferential -- are in general true only in the vacuous sense (as statements about the empty set).
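As a small numerical illustration of this point (not from the notes): for the concave function $f(x) = -|x|$, no slope $g$ satisfies the subgradient inequality at $0$, since $x = 1$ forces $g \le -1$ while $x = -1$ forces $g \ge 1$, so the convex subdifferential is empty. A quick brute-force check in Python:

```python
import numpy as np

def is_subgradient(f, g, x0, xs, tol=1e-12):
    """Check the subgradient inequality f(x) >= f(x0) + g*(x - x0) on test points."""
    return all(f(x) >= f(x0) + g * (x - x0) - tol for x in xs)

xs = np.linspace(-1, 1, 201)     # test points
gs = np.linspace(-5, 5, 1001)    # candidate subgradients

# Convex |x|: every g in [-1, 1] is a subgradient at 0.
print(is_subgradient(abs, 0.5, 0.0, xs))                               # True
# Concave -|x|: no candidate g satisfies the inequality at 0,
# so the convex subdifferential is empty there.
print(any(is_subgradient(lambda x: -abs(x), g, 0.0, xs) for g in gs))  # False
```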
However, there are further generalizations of subdifferentials for non-convex functions, which are non-empty for larger (but still restricted) classes of functions and admit similar properties (in particular, necessary optimality conditions). Needless to say, the larger the class, the trickier they are to work with. One prominent example is Clarke's generalized gradient of locally Lipschitz continuous functions (see, e.g. Chapter 10 Clarke, Functional Analysis, Calculus of Variations and Optimal Control, Springer 2013). You can find even more generalized derivatives in Schirotzek, Nonsmooth analysis, Springer 2007.
Christian Clason
$\begingroup$ Just came across this answer again. The references are very helpful. Great answer! $\endgroup$ – Khue May 13 '17 at 12:06
In terms of the subdifferential definition used in the notes, the statement is immediate by definition.
For a more general notion of subdifferential, Proposition 2.3.2 on page 38 of Clarke, F. H. (1990), "Optimization and Nonsmooth Analysis" says:
If $f$ attains a local minimum or maximum at $x$, then $0\in \partial f (x).$
$\begingroup$ That's a different subdifferential (although it coincides with the convex subdifferential if $f$ is convex). And you just disproved the opposite direction :) $\endgroup$ – Christian Clason Apr 22 '16 at 19:06
$\begingroup$ I agree that one should not mix different definition of subdifferential -- thanks! I have updated my answer. $\endgroup$ – user3605620 Apr 22 '16 at 19:16
Studies of the Equation of State of Asymmetric Nuclear Matter
Lead Research Organisation: University of Liverpool
Department Name: Physics
This grant proposal is to study the equation of state (EOS) of asymmetric nuclear matter. The EOS is a fundamental property of nuclear matter and describes the relationships between the energy, pressure, temperature, density and isospin asymmetry for a nuclear system. It can be divided into a symmetric matter contribution that is independent of the isospin asymmetry and an isospin term (also known as the symmetry energy) that is proportional to the square of the asymmetry. The EOS of asymmetric nuclear matter is also a quantity of crucial significance in understanding the physics of isolated and binary neutron stars, type II supernovae and neutron star mergers. Strong synergies exist between the research programme of this grant proposal and several high priority STFC programmes in astrophysics which address the physics of neutron stars and gravitational waves, including Advanced LIGO/GEO600, LISA and SKA. Measurements of isoscalar collective vibrations, collective flow and kaon production in energetic nucleus-nucleus collisions have constrained the equation of state for symmetric matter for densities ranging from saturation density to five times saturation density. However, the EOS of asymmetric matter has comparatively few experimental constraints. The international ASYEOS collaboration (Europe, the USA and Japan), of which we are leading members, has recently been formed to study the EOS of asymmetric nuclear matter. In the period of this grant proposal, the collaboration intends to exploit the stable and rare isotope beams already available from existing facilities such as GSI, GANIL, MSU and RIBF-RIKEN to study the behaviour of the symmetry energy from sub-saturation densities (0.5-1.0 times normal nuclear matter density) to supra-saturation densities (2.0 times normal nuclear matter density and above). This will pave the way for studies in the future at new facilities such as FAIR, FRIB and EURISOL. 
The UK physicists will lead the components of the programme at GSI and GANIL. These components are: (a) neutron/proton flow measurements in Sn+Sn reactions at 200-800 AMeV at GSI, and (b) isospin diffusion measurements in Ca+Ca reactions at 35 AMeV at GANIL.
Oct 09 - Oct 13
ST/G008833/1
Marielle Chartier
Nuclear Physics (100%)
Nuclear Astrophysics (50%)
Relativistic Heavy Ions (50%)
University of Liverpool, United Kingdom (Lead Research Organisation)
Facility for Antiproton and Ion Research (Collaboration)
Large National Heavy Ion Accelerator (Collaboration)
INDRA Collaboration (Collaboration)
Michigan State University, United States (Collaboration)
European Organization for Nuclear Research (CERN) (Collaboration)
RIKEN, Japan (Collaboration)
Helmholtz Association of German Research Centres (Collaboration)
Marielle Chartier (Principal Investigator)
Abelev B (2014) J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV in Journal of High Energy Physics
Abelev B (2014) Multiplicity dependence of pion, kaon, proton and lambda production in p-Pb collisions at √sNN = 5.02 TeV in Physics Letters B
Abelev B (2014) Beauty production in pp collisions at √s = 2.76 TeV measured via semi-electronic decays in Physics Letters B
Abelev B (2014) Measurement of prompt D-meson production in p-Pb collisions at √sNN = 5.02 TeV in Physical Review Letters
Abelev B (2014) Event-by-event mean p_T fluctuations in pp and Pb-Pb collisions at the LHC in The European Physical Journal C
Abelev B (2014) Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV in Physics Letters B
Abelev B (2014) Centrality, rapidity and transverse momentum dependence of J/ψ suppression in Pb-Pb collisions at √sNN = 2.76 TeV in Physics Letters B
Abelev B (2014) Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV in Journal of High Energy Physics
Abelev B (2014) Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at √sNN = 2.76 TeV in Physics Letters B
Figueredo M (2014) Measurement of open-charm hadrons with ALICE in Journal of Physics: Conference Series
Description - New constraints on the nuclear equation of state of asymmetric matter at twice the normal nuclear matter density with Au+Au heavy-ion collisions at GSI (ASY-EOS experiment).
- Signal extraction for the open charm baryon Lambda-c in p-p collisions at the LHC with the ALICE experiment, to study the properties of the Quark-Gluon Plasma.
- Contributed to the development of the new SPiRIT TPC at RIKEN.
- New results on the nuclear equation of state of asymmetric nuclear matter at sub-saturation densities with Ca+Ca heavy-ion collisions at GANIL (INDRA+VAMOS experiment).
Exploitation Route New results on the nuclear equation of state are relevant to astrophysics, e.g. supernovae, neutron stars.
Sectors Education
Description ALICE Collaboration
Organisation European Organization for Nuclear Research (CERN)
Department ALICE Collaboration
Country Switzerland
PI Contribution Data analysis of LHC data from Run1 and Run2 (heavy-flavour physics working group). ITS upgrade project: Monte Carlo simulations, construction of modules and staves for the Outer Barrel. Supervision of UG and PhD student projects. Meetings of ALICE-UK research groups (Univ. of Birmingham, Univ. of Liverpool, STFC Daresbury). Presentations at conferences, meetings and workshops.
Collaborator Contribution Access to beam time, data, GRID and other CERN infrastructure and resources, ALICE collaboration international network etc.
Impact Publications. Training of UG and PhD students and research staff. Invitations to speak at meetings, workshops, conferences.
Description ASY-EOS Collaboration
Organisation Helmholtz Association of German Research Centres
Department GSI Helmholtz Centre for Heavy Ion Research
PI Contribution Collaborative research/experiments. Data analysis and monte carlo simulations (supervision of PhD students and PDRA), scientific input (experimental proposals, authorship of publications...). Manpower (research staff, PhD students) for running experiments. Presentations at collaboration meetings, workshops, conferences.
Collaborator Contribution Access to research large-scale facility and beam time, instrumentation for experiments, technical support, PhD students and research staff, etc.
Impact Publications. PhD theses (P. Wu, S. Gannon in progress). Training of PhD students and research staff. Invitations to speak at meetings, workshops, conferences.
Description INDRA-VAMOS Collaboration
Organisation INDRA Collaboration
PI Contribution Collaborative research/experiments. Data analysis, monte carlo simulations and theoretical modelling (supervision of PhD students), scientific input (experimental proposals, authorship of publications...). Manpower (research staff, PhD students) for running experiments. Presentations at collaboration meetings, workshops, conferences.
Impact Publications. PhD thesis (P. Wigg in progress). Training of PhD students and research staff. Invitations to speak at meetings, workshops, conferences.
Organisation Large National Heavy Ion Accelerator
Description R3B Collaboration (NUSTAR)
Organisation Facility for Antiproton and Ion Research
Department Nuclear Structure, Astrophysics and Reactions
PI Contribution Collaborative research/experiments. Leadership in design and construction of detection systems (e.g. Si Tracker and associated EDAQ). Data analysis and monte carlo simulations (supervision of PhD students), scientific input (experimental proposals, authorship of publications...). Manpower (technical and research staff, PhD students) for construction of equipment and running experiments.. Presentations at collaboration meetings, workshops, conferences.
Collaborator Contribution Access to research large-scale facility and beam time, instrumentation for experiments, technical support, PhD students and research staff, etc. T. Aumann spokesperson of R3B collaboration.
Impact Publications. PhD theses (S. Paschalis, J. Taylor). Training of PhD students and research staff. Invitations to speak at meetings, workshops, conferences. Project leadership of Si tracker (NUSTAR-UK project grant).
Description SAMURAI Collaboration
Organisation Michigan State University
Department National Superconducting Cyclotron Laboratory
PI Contribution Collaborative research/experiments. Data analysis and monte carlo simulations (supervision of PhD student), scientific input (experimental proposals, authorship of publications...). Manpower (research staff, PhD student) for construction of TPC and running experiments. Presentations at collaboration meetings, workshops, conferences.
Impact Publications. PhD thesis in progress (W. Powell). Training of PhD student and research staff. Invitations to speak at meetings, workshops, conferences. Funding from RIKEN-Univ Liverpool agreement for PhD studentship.
Organisation RIKEN
Department RIKEN-Nishina Center for Accelerator-Based Science
Description IOP School Grants Scheme
Part Of Official Scheme? Yes
Type Of Presentation Workshop Facilitator
Results and Impact Funding awarded to Frodsham CE Primary school to visit Jodrell Bank Observatory.
I was the scientific partner for this grant who initiated the idea, helped formulate the funding request and took part in the visit with year 4 children.
Difficult to quantify.
Hopefully help broaden the provision of the science curriculum in this primary school and develop taste for physics/astrophysics in young children.
Description Nuclear Physics Masterclasses
Results and Impact About 20 sixth-formers attend practical activities in the new Canberra Laboratory at the Univ. of Liverpool award-winning CTL facilities and presentations over a number of days, including discussions in the Q&A part of the presentations.
I was the academic lead scientist for these masterclasses since 2012.
Hopefully help attract students to study Physics at University and develop awareness of Nuclear Physics impact on everyday life.
Year(s) Of Engagement Activity 2012,2013,2014
Description Women in Physics Workshop
Type Of Presentation Keynote/Invited Speaker
Results and Impact Invited Talk at Women in Physics (WiP) workshops, organised by the Physics Outreach Group of the University of Liverpool for girls in year 12 of high school, taking AS and/or A2 courses in Physics. | CommonCrawl |
Zhuokun Pan1,2,
Yueming Hu1,2 &
Bin Cao3
Research in time-series remote sensing data is receiving increasing attention. With the availability of satellite data of relatively short repeat cycle and high spatial resolution, the construction and application of high spatiotemporal remote sensing time-series data is promising. In this paper, we propose a method to construct complete spatial time-series data, using a Savitzky-Golay filter for smoothing and locally adaptive linear interpolation for generating daily NDVI imagery. An IDL-based program was developed to achieve this goal. China's HJ-1 A/B satellite data were employed for the time-series construction. The results demonstrate that: (1) this method can successfully generate smooth, continuous time-series image data from irregularly spaced short-revisit remote sensing data; (2) HJ-1 A/B NDVI time series were shown to be successful in monitoring crop phenology, and hyperspectral analysis was successfully applied to the HJ-1 A/B time-series data to perform temporal endmember extraction. The IDL-based time-series construction program is generalizable to various kinds of multi-temporal remote sensing data, such as MODIS vegetation-index products. Discussion and concluding remarks reveal the authors' perspective on higher spatial resolution time-series analysis in the remote sensing community.
With the launch of remote sensing satellites offering frequent revisits and the growing availability of data, time-series data derived from multi-temporal remote sensing images are receiving significant attention for studying the dynamics of regional vegetation growth, phenological crop identification, land use change detection, etc. [1,2,3,4]. In particular, vegetation index (VI) products as time-series data have been widely employed in the remote sensing community. These data help us to understand the earth system and land-surface dynamics [4, 5]. However, most VI time-series data were derived from low spatial resolution satellite platforms such as the NOAA-AVHRR (Advanced Very High-Resolution Radiometer) instruments, EOS-MODIS (Moderate Resolution Imaging Spectroradiometer), and the SPOT (Système Pour l'Observation de la Terre) VGT product [6,7,8,9,10]. Higher spatiotemporal resolution images can deepen the understanding of land surface dynamics; generating consistent and comparable finer spatiotemporal time-series imagery is therefore critical [11].
Several researchers have developed methods to increase spatial resolution and resolve the trade-off between temporal and spatial resolution [12,13,14,15]. The term "data fusion" has been proposed for taking advantage of different scales and repeat cycles, for example using one scene of Landsat TM imagery to predict another date based on its relationship with high-frequency MODIS imagery [16]. However, the performance of data fusion depends heavily on its sensitivity to both spatial heterogeneity and spectral inconsistency, meaning it is not applicable in all areas [17]. Data fusion still relies on the availability of actual satellite images and on the quality of the ingested remote sensing data. Even though it can be used to make synthetic images from multiple sources, these fused images cannot replace actual images [16].
With advancing technology for launching satellite constellations, multiple remote sensing satellites can be brought into orbit at low cost; with both radiometric and spatial consistency, these satellites bring new perspectives for earth observation [4, 15, 18, 19]. One successful mission is Planet Labs' remote sensing satellite system [20]. These small satellites provide a highly frequent revisit cycle, which meets the requirement of daily observation.
In contrast to the idea of multi-source data fusion, enhanced spatiotemporal resolution may be achieved with newly launched remote sensing satellites that deliver genuinely high spatial and temporal resolution data [15]. In previous work, Pan et al. employed mono-source remote sensing data (the two-day-repeat HJ-1 A/B data) and developed a daily time-series construction method [3]. Similarly, Sun et al. redeveloped the TIMESAT program by modifying its adaptive smoothing and adding daily interpolation, aiming to generate daily 30 m Landsat time series [21].
Remote sensing time-series data are commonly used in phenology monitoring. To facilitate the processing and analysis of time series, researchers may have encountered two computer programs: TIMESAT and SPIRITS [3, 18, 21,22,23]. Conventionally, MODIS data have been widely used for monitoring vegetation dynamics with these two programs. However, these programs were designed to handle MODIS or SPOT-VGT VI products organized at fixed-day intervals (e.g., 8- or 10-day composites). To date, higher resolution remote sensing satellites (e.g., Sentinel, RapidEye, Planet Labs) have become operational, yet few studies have adopted these short repeat-cycle, high spatial resolution data for time-series analysis in the same way as MODIS. To construct a high spatiotemporal time series from these remote sensing data and make it applicable to TIMESAT or SPIRITS, specifically for phenology detection, one major problem is ensuring the continuity and completeness of the time-series dataset [3].
As such, up-to-date remote sensing satellite constellations provide sufficient images, which creates the potential for constructing high spatiotemporal resolution time-series data. The motivation for writing this paper was to address this topic by describing a methodology, together with a computer program, that serves the common interest of facilitating time-series remote sensing analysis, specifically for up-to-date high spatial resolution satellite data.
China's HJ-1 A/B satellite data were employed for testing in this paper. Launched in 2008, the HJ-1 A/B constellation is a new generation of small Chinese civilian remote sensing satellites [24]. The HJ-1 A/B satellites carry two optical sensors that perform earth observation at 30-m resolution, with four bands covering the visible and near-infrared wavelength range. The two satellites constitute an observation network that covers China and its surrounding areas with a two-day repeat cycle. Taking advantage of their relatively high spatial resolution and frequent repeat cycle, some researchers have demonstrated their potential for constructing dense time-series data by data fusion with MODIS [11, 15].
The main task of this study is to use HJ-1 A/B data for NDVI (normalized difference vegetation index) time-series construction. Since the two-day-repeat HJ-1 A/B satellites provide a considerable number of images for time-series construction in our test site, we downloaded year-round HJ-1 A/B CCD images from 2012 for testing. There were 73 scenes available in 2012, and the images were cloud-free in the research area (Fig. 1 (a)). The site selected for testing HJ-1 A/B satellite data was Yangling, located in the Guanzhong Plain of Shaanxi Province, China. This area has a double cropping system: winter wheat is sown in October and harvested in June; summer corn is sown in June and harvested in October. Production of these HJ-1 A/B NDVI images follows the same conventional procedure as for other remote sensing imagery, which has been summarized in the literature [3, 11, 15]. The NDVI images were layer-stacked to construct a time-series dataset (Fig. 1 (b) and (c)).
Data preparation: (a) acquisition of HJ-1 A/B CCD images; (b) collection of subset images in the test site and coregistration; (c) calculation of NDVI and construction of the time-series dataset
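As a concrete illustration of step (c) in Fig. 1, the standard NDVI definition, NDVI = (NIR - Red)/(NIR + Red), and the layer stacking can be sketched with numpy. The paper's actual implementation is IDL-based; the band values below are made-up toy reflectances, not HJ-1 data.

```python
import numpy as np

def ndvi(red, nir, eps=1e-10):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against zero denominators

# Toy reflectance bands for a 2x2 scene (illustrative values, not HJ-1 data).
red = np.array([[0.05, 0.10], [0.20, 0.08]])
nir = np.array([[0.40, 0.30], [0.25, 0.45]])
print(ndvi(red, nir))

# Layer-stacking per-date NDVI images into a (time, row, col) time-series cube.
dates = [ndvi(red * s, nir) for s in (1.0, 1.2, 0.8)]  # fake multi-date scenes
cube = np.stack(dates, axis=0)
print(cube.shape)  # (3, 2, 2)
```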
Satellite images captured by optical remote sensors usually contain noise due to weather conditions and changing solar illumination throughout the year [25, 26]. When performing time-series smoothing, researchers should also remember that maintaining the original characteristics of the time-series profile is critical [25]. As with other high spatial resolution images, the trade-off is that HJ-1 A/B time-series data are spaced irregularly in time. Signal processing techniques (e.g., wavelet and Fourier analysis) require time-series data with regular, equidistant spacing [27, 28]; hence they may not work well on the non-equidistantly spaced time-series data derived from the HJ-1 A/B satellites.
Moreover, remote sensing VI products such as MODIS and SPOT-VGT are organized at fixed-day intervals (e.g., 8- or 10-day composites). The processing methods available in time-series software (e.g., TIMESAT) do not function for unevenly distributed time series; in addition, function fitting or Fourier-based filters may be problematic when applied to irregular VI time series [29]. Therefore, in this section, we improve our method by first introducing the Savitzky-Golay (S-G) smoothing method. Other smoothing methods are then tested for comparison to demonstrate its superiority. Finally, a missing-data interpolation is proposed to ensure regular daily spacing and generate daily NDVI images. IDL (Interactive Data Language) programming helps to achieve these goals.
S-G smoothing method
Time-series smoothing must be performed to retrieve the essential shape of a curve. In this paper, the Savitzky-Golay (S-G) smoothing method was employed to handle the irregular spacing in the HJ-1 A/B NDVI time series. The S-G filter, also known as the least-squares or digital smoothing polynomial filter, can be used to smooth a noisy signal [30]. The algorithm can be described as follows:
$$ {g}_i=\sum_{n=- nL}^{nR}{c}_n{f}_{i+n} $$
where f i represents the original data value in the time series, and g i is the smoothed value, a linear combination of the coefficients c n and the values f i+n . Here, n indexes the points within the moving window, and nL and nR are the numbers of points to the left and right of the central point, so the window width is nL + nR + 1. If c n is a constant defined as c n = 1/(nL + nR + 1), the S-G filter reduces to a moving-average smoother. The idea of S-G filtering is to find filtering coefficients c n that preserve higher moments. Therefore, as in Eq.(2), c n is not a constant but is derived from a polynomial fitting function, typically quadratic or quartic; a least-squares fit over the window from nL to nR then yields c n . For a given window of the time series, we define the fitting function as a quadratic polynomial over the corresponding range of f i :
$$ {c}_n(t)={c}_1+{c}_2t+{c}_3{t}^2 $$
where t corresponds to the day of the year in the NDVI time series. The smoothed value g i can then be obtained via Eq.(1). The result of the S-G smoothing method is shown in Fig. 2.
Smoothing time-series data by S-G method
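The window-wise quadratic fit described by Eqs. (1)–(2) can be sketched in Python. This is a hedged illustration only: the paper's actual implementation is in IDL, and the function name and window size below are illustrative choices, not the authors' code.

```python
import numpy as np

def sg_smooth_uneven(doy, ndvi, half_window=3, degree=2):
    """Savitzky-Golay-style smoothing for unevenly spaced samples:
    fit a low-order polynomial c(t) = c1 + c2*t + c3*t^2 to each
    moving window (t = day of year) and evaluate it at the central
    observation date, as in Eqs. (1)-(2)."""
    doy = np.asarray(doy, dtype=float)
    ndvi = np.asarray(ndvi, dtype=float)
    smoothed = np.empty_like(ndvi)
    n = len(ndvi)
    for i in range(n):
        lo = max(0, i - half_window)        # up to nL points to the left
        hi = min(n, i + half_window + 1)    # up to nR points to the right
        coeffs = np.polyfit(doy[lo:hi], ndvi[lo:hi], degree)
        smoothed[i] = np.polyval(coeffs, doy[i])
    return smoothed
```

Because the fit is performed against the actual acquisition dates rather than sample indices, irregular spacing is handled naturally, which is the key property the paper exploits.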
Comparison between S-G and other methods
Several commonly used smoothing methods were tested as follows; the results demonstrate the S-G filter's superiority for this kind of data.
Global function fitting
SPLINE-curve fitting is a global function-fitting approach to smoothing discrete data: a polynomial equation is formed whose curve represents the discrete data. Likewise, other function-fitting methods, such as the asymmetric Gaussian model, have been adopted for fitting AVHRR-NDVI time-series data [1]. In this study, we applied a SPLINE-style cubic fit, \( y = ax^3 + bx^2 + cx + d \), and a Gaussian fit, \( y = a\,e^{-\left(x-b\right)^2/c} \), to the HJ-1 A/B time-series data for smoothing (Fig. 3). The curve in Fig. 3(a) appears to fit well but does not maintain the essential shape of the time-series trajectory; in Fig. 3(b), the Gaussian function performed poorly in a double-cropping area, which has two growth cycles per year. As demonstrated, global function-fitting methods are not suitable for unevenly spaced time-series data.
Smoothing time-series data by function fitting with (a) a polynomial function and (b) a Gauss function
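The shortcoming of a single global fit can be reproduced with a small numerical sketch. The synthetic double-cropping profile below is illustrative, not the paper's test-site data.

```python
import numpy as np

# Synthetic double-cropping NDVI profile: two growth cycles per year.
doy = np.arange(0, 365, 8, dtype=float)
ndvi = (0.40 * np.exp(-((doy - 120.0) / 25.0) ** 2)
        + 0.50 * np.exp(-((doy - 260.0) / 25.0) ** 2)
        + 0.15)

# A single global cubic, y = a*x^3 + b*x^2 + c*x + d, cannot follow
# both peaks: a cubic has at most one interior maximum.
coeffs = np.polyfit(doy, ndvi, 3)
fitted = np.polyval(coeffs, doy)
rmse = float(np.sqrt(np.mean((fitted - ndvi) ** 2)))  # substantial residual
```

The residual stays large precisely because the global polynomial erases the second growth cycle, the failure mode visible in Fig. 3.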
Signal denoising
Viewing the time-series data as a signal, the fast Fourier transform (FFT) and the wavelet transform (WT) were adopted to handle the HJ-1 A/B time-series data. FFT and WT have already been applied to MODIS VI time series to retrieve a smoothed trajectory of the vegetation growth cycle [8]. We programmed IDL-based FFT and wavelet-transform functions and applied them to the HJ-1 A/B time-series data. However, as mentioned before, signal denoising (by FFT or WT) did not perform well for unevenly spaced time-series data (Fig. 4); no matter how the denoising parameters were set, these methods neither maintained the original shape nor preserved the original dates of the time series.
Smoothing time-series data by signal denoising: (a) wavelet transform; (b) Fourier transform
HANTS method
HANTS (Harmonic Analysis of Time Series) is a commonly used tool for smoothing time-series remote sensing data (http://www.un-spider.org/links-andresources/gis-rs-software/hants%C2%A0harmonic%C2%A0analysis-of%C2%A0time%C2%A0series-nlrgdsc). HANTS can be used to remove cloud effects, smooth the dataset, interpolate missing data, and compress the data. Although the HANTS method can generate a pleasing-looking time series, as a signal-denoising method it has the same problem as FFT and WT. For the unevenly spaced HJ-1 A/B time-series data, HANTS tended to maintain the spatial completeness of a pixel profile (as in cloud removal) but to sacrifice its temporal characteristics. Moreover, the HANTS method did not preserve the original dates of the time series; temporal characteristics revealing critical phenology details were lost (Fig. 5).
Smoothing time-series data by HANTS method
Generating daily NDVI images
S-G filtering was employed to smooth the HJ-1 A/B NDVI time-series data and ensure a continuous, complete time-series dataset. A feasible approach was then proposed to enforce regular spacing on a daily basis. Linear interpolation is a simple interpolation method commonly used in mathematics and computer science; this paper develops a locally adaptive linear interpolation to generate the missing data throughout the NDVI time series. The missing data between two images can be generated by Eqs. (3)–(4):
$$ \frac{NDVI-{NDVI}_0}{DOY-{DOY}_0}=\frac{NDVI_1-{NDVI}_0}{DOY_1-{DOY}_0} $$
$$ NDVI={NDVI}_0+\left({NDVI}_1-{NDVI}_0\right)\times \frac{DOY-{DOY}_0}{DOY_1-{DOY}_0} $$
where NDVI represents the value on the missing day (DOY) to be interpolated, and NDVI 0 and NDVI 1 represent the two nearest valid images used for the interpolation. The NDVI between NDVI 0 and NDVI 1 is thus treated as varying linearly and is generated according to Eq. (4).
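Applied over every interval between consecutive valid images, Eq. (4) yields a daily series. A minimal Python sketch (illustrative; NumPy's `interp` performs exactly this piecewise-linear computation):

```python
import numpy as np

def interpolate_daily(doys, values):
    """Generate a value for every day between the first and last
    observation by linear interpolation, Eq. (4), applied to each
    interval [DOY_0, DOY_1] in turn."""
    doys = np.asarray(doys, dtype=float)
    values = np.asarray(values, dtype=float)
    daily_doys = np.arange(doys[0], doys[-1] + 1.0)
    daily_values = np.interp(daily_doys, doys, values)
    return daily_doys, daily_values
```

For example, observations of 0.2 on day 100 and 0.4 on day 110 yield an interpolated value of 0.3 on day 105.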
The smoothing performance and interpolation accuracy were evaluated by 1:1-line comparison as shown in Fig. 6; both exhibited a good fit.
Performance of time-series construction (a pixel sample from double-cropping land)
Most commonly used remote sensing software (e.g., ERDAS, ENVI) does not provide functionality for manipulating time-series data; at present, TIMESAT and SPIRITS are likewise not designed for such relatively high spatial resolution time-series data. In addition, no ready-to-run processing framework has been available that allows researchers to obtain a high spatiotemporal time-series dataset meeting their research demands. Although the methodology described above is easily implemented for a one-dimensional array on most programming platforms, the question is how to perform smoothing and interpolation on time-series data stored as a three-dimensional array. IDL is an array-oriented language with numerous mathematical analysis and graphical display techniques, making it an ideal programming language for image data analysis, visualization, and cross-platform application development (http://www.harrisgeospatial.com/ProductsandTechnology/Software/ENVI.aspx). Anyone working with imagery or raster data has probably encountered ENVI software; its library routines are IDL-based functions and procedures. Our IDL programming builds on ENVI functions capable of operating on remote sensing images; we thus developed a program to perform filtering and interpolation on three-dimensional time-series data.
As introduced in the S-G smoothing method and Generating daily NDVI images sections, this program consists of two main steps: time-series filtering and image interpolation. The IDL-based program was developed with the IDL function library, using Savitzky-Golay filtering and interpolation. The program treats the time-series dataset as a three-dimensional array, calling ENVI functions to manipulate individual images; it loops pixel by pixel, extracting the time dimension to perform filtering and interpolation.
As described in the S-G smoothing method section, the S-G filter minimizes overall noise in the NDVI time series while preserving the original trajectory. The IDL program requires users to define the width of the moving window and the degree of the polynomial fit in S-G filtering. The interpolation in the IDL program includes three commonly used methods: (1) simple linear interpolation, as described in Eqs. (3)–(4); (2) a least-squares quadratic fit to each four-point neighborhood (x[i-1], x[i], x[i + 1], x[i + 2]) surrounding the interval; and (3) a SPLINE fit, a polynomial fit to the same four surrounding points. Users can try different interpolation methods to achieve the best effect. The overall schematic of the program's functionality is shown in Fig. 7. Since the interpolation is applied locally within a defined interval of the time series, the essential shape of the NDVI trajectory is well maintained.
Schematic of the smoothing and interpolation for NDVI images
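The pixel-by-pixel design of the program can be illustrated with a Python/NumPy analogue (the paper's actual code runs in IDL through ENVI routines; `smooth_fn` below stands in for any one-dimensional smoother, such as an S-G filter):

```python
import numpy as np

def build_daily_stack(doys, stack, smooth_fn):
    """stack has shape (n_dates, rows, cols). For each pixel, extract
    the temporal profile, smooth it with smooth_fn (e.g. an S-G
    filter), then linearly interpolate onto a daily grid."""
    doys = np.asarray(doys, dtype=float)
    n_dates, rows, cols = stack.shape
    daily = np.arange(doys[0], doys[-1] + 1.0)
    out = np.empty((daily.size, rows, cols), dtype=float)
    for r in range(rows):          # loop pixel by pixel, as the IDL
        for c in range(cols):      # program does via ENVI functions
            profile = smooth_fn(doys, stack[:, r, c])
            out[:, r, c] = np.interp(daily, doys, profile)
    return out
```

Treating the stack as a cube and iterating over the two spatial axes mirrors the program's strategy of extracting the time dimension for each pixel.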
Because our method can construct equidistantly spaced time series from high spatiotemporal remote sensing images, this paper provides further tests and potential applications.
Extracting phenology
Remote sensing data are particularly useful for detecting regional crop phenological characteristics [8]. HJ-1 A/B is a constellation of two satellites, which allows a two-day observation cycle; with the proposed methodology and the developed computer program, HJ-1 A/B data can be used to construct a complete time-series dataset, making it possible to obtain key crop growth stages. Since the complete growth cycle of vegetation was established at a daily interval, this study employed the TIMESAT program to extract the start and end of crop seasons from the NDVI time series (Fig. 8).
Season start and end of double-cropping area in NDVI time series
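TIMESAT offers several season-detection rules; the widely used dynamic-threshold idea can be sketched as follows. This is an illustration of the principle only, not TIMESAT's implementation, and the 20% amplitude fraction is a common but arbitrary choice.

```python
import numpy as np

def season_start_end(daily_doy, daily_ndvi, frac=0.2):
    """Return the first and last day on which NDVI exceeds
    base + frac * amplitude -- a simple dynamic-threshold rule."""
    v = np.asarray(daily_ndvi, dtype=float)
    doy = np.asarray(daily_doy)
    threshold = v.min() + frac * (v.max() - v.min())
    above = np.where(v > threshold)[0]   # indices above the threshold
    return int(doy[above[0]]), int(doy[above[-1]])
```

Running this on each pixel's daily NDVI profile yields the kind of start/end-of-season maps shown in Fig. 9.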
Since phenology dynamics in remote sensing indicate the actual crop growth process on a per-pixel basis, these dynamic processes correspond directly to actual, ground-based phenological events, which provide indicators of climate variations; in addition, fine-scale crop seasonality reflects the spatial arrangement of agricultural activities. This study presents maps of the start and end of crop seasons (measured in day-of-year) for our test site in 2012 (Fig. 9). The results suggest that time-series data derived from the HJ-1 A/B satellites are applicable for extracting crop phenology, and the distribution of phenological dates (measured in day-of-year) was robust and convincing.
Season start and end extraction for wheat-corn double-cropping land
Spectral analysis with spatial time-series data
Time-series remote sensing data can be regarded as combinations of temporal endmembers in a temporal feature space whose dimensions represent different components of time-domain processes [31]. Hence, time-series remote sensing data can be treated as a hyperspectral-like dataset on which temporal endmember extraction can be performed. Endmember extraction is the process of selecting a collection of pure signature spectra of the materials present in a hyperspectral image scene [32]. By analogy with spectral mixture analysis in spectral feature spaces, the temporal feature space conveys the spatiotemporal characteristics of ground substances. This study implemented the sequential maximum angle convex cone (SMACC) [33] spectral tool to extract temporal endmembers from the NDVI time-series data.
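The intuition behind SMACC can be conveyed with a stripped-down sequential selection. This sketch keeps only the max-residual core of the idea; real SMACC builds a convex cone with non-negativity constraints, which is omitted here.

```python
import numpy as np

def sequential_endmembers(X, k=2):
    """Pick k 'pure' temporal signatures from X (rows = pixel time
    series): repeatedly choose the row with the largest residual
    norm, then project that direction out of all rows."""
    residual = np.asarray(X, dtype=float).copy()
    chosen = []
    for _ in range(k):
        idx = int(np.argmax(np.linalg.norm(residual, axis=1)))
        chosen.append(idx)
        e = residual[idx] / np.linalg.norm(residual[idx])
        # Remove the chosen direction from every remaining profile.
        residual = residual - np.outer(residual @ e, e)
    return chosen
```

Rows that are mixtures of already-selected signatures shrink toward zero residual, so the pixels with the most distinctive temporal trajectories are selected as endmembers.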
As shown in Fig. 10, we extracted temporal endmembers representing several typical ground substances. The temporal trajectory of crop phenology is a good indicator for distinguishing cropping areas from other land cover types. Additionally, because different types of vegetation have different phenologies, cropping types are more easily discriminated. Beyond that, the time-domain behavior of a VI time series reflects the process of crop growth and the management level at different geographic locations; such information is important for agricultural productivity assessment.
Endmembers extraction with NDVI time-series data
Test with MODIS EVI data
Since ENVI/IDL-based programming provides convenient interaction for manipulating remote sensing images, our method is generalizable to other multi-temporal remote sensing data for constructing a smooth time-series dataset with daily interpolation. For test purposes, 46 scenes of MODIS EVI images (8-day composite, 500-m resolution) covering one year were obtained to construct a complete time series. The smoothing and interpolation results shown in Fig. 11 suggest that the program is capable of retrieving the essential trajectory of vegetation growth in the time dimension.
A test with MODIS EVI product
Owing to the explosive growth of data, research on time-series remote sensing data is receiving more and more attention; this study was therefore directed at building time series of higher spatial resolution, just as researchers have done with the VI products derived from AVHRR, MODIS and SPOT-VGT. China's HJ-1 A/B remote sensing data were successfully employed to construct a complete and applicable time-series dataset at 30-m resolution. However, few research articles have reported using such relatively high spatiotemporal remote sensing data to construct a time-series dataset, and no processing software/tool is available for that purpose. In addressing this topic, a new method together with an IDL-based program for smoothing and interpolating time-series data was therefore developed in this paper. Some remarks are given below.
When considering a high spatiotemporal time series, users should be acquainted with data availability in the study area and the expense involved. In this paper, open-access HJ-1 A/B data were selected to be cloud-free, because cloud-free NDVI data are relatively little affected by noise; the S-G filtering method then works well in maintaining the essential shape of the NDVI trajectory, particularly for accurately extracting vegetation phenology.
Other smoothing methods, like HANTS, are well known for time-series smoothing; however, HANTS tends to ensure spatial completeness while sacrificing important information in the time dimension [34], and this lost information is important for delineating vegetation growth. Likewise, alternative function fitting (sigmoid curves such as the logistic model) or Fourier-based denoising were not suitable here, because their strict shape-matching is problematic for the unevenly spaced time-series data derived from the HJ-1 A/B satellites; such methods may exaggerate high fluctuations in time-series data, making the result unconvincing. A comparison of smoothing methods is presented in the section Comparison between S-G and other methods. In particular, when a time series is constructed for phenology detection, the smoothing method should be chosen cautiously [25]. The methodologies employed in this paper for smoothing and interpolation aimed not at the most visually pleasing result, but at the most accurate one. Further comments and discussion can be found in the literature [25, 29, 35, 36].
Several articles indicate that Sentinel-2 A/B data are promising [11, 18]. With their short global revisit cycle and 10-m resolution, these satellites will play an additional role in high spatiotemporal time-series remote sensing. In addition, commercial satellite constellations such as RapidEye and Planet Labs should be considered for building time series, since the underlying value of time-series analysis is genuinely needed by the public.
Conclusions and perspectives
The motivation of this paper was to provide a new method and a computer program that facilitate the construction of time-series remote sensing data, with generalizable and practical applications, specifically for up-to-date high spatial resolution satellite data. To achieve this goal, this study presented comprehensive processing procedures to construct HJ-1 A/B NDVI time series: the Savitzky-Golay smoothing method was first employed to reduce noise components and retrieve the original shape of the time-series profile; then, a locally adaptive linear interpolation was employed to generate daily NDVI from the available images. Finally, an IDL-based program was developed to implement these procedures.
Our method was able to produce high-quality NDVI time series and may advance applications in various fields of study. As the application cases in the Potential applications section show, we first presented the use of HJ-1 A/B NDVI time-series data for fine-scale phenology characterization; second, we extracted typical endmembers from the time-series data that represent spatiotemporal characteristics of ground substances, indicating the potential of applying hyperspectral analysis techniques to time-series remote sensing data. Finally, MODIS EVI data were used for testing; the results suggest that the program is generalizable to most time-series remote sensing data.

Based on these primary tests, we believe that higher-resolution time-series remote sensing data will be far more valuable than conventional MODIS or SPOT-VGT products, making such techniques acceptable and accessible to the public in everyday life. High-frequency remote sensing data will no longer be restricted to the medium- and low-resolution domains; the importance of time-series remote sensing data has been recognized, and higher spatial resolution will add to it. Researchers should be encouraged to advance its applications in other disciplines. Constructing high spatiotemporal time-series data requires a considerable number of multi-temporal images, so researchers may be concerned about cost, data acquisition and preprocessing. Currently, programs and tools for satellite image processing face technical challenges from upcoming sensors of ever-increasing spatial and temporal resolution. There is an urgent need to develop standards for data-processing flows and analysis systems that allow fast processing of such higher-magnitude time-series data [3, 18].
https://github.com/panzhuokun/Time-series-remote-sensing-construction-IDL-code-.git
Jönsson P, Eklundh L. Seasonality extraction by function fitting to time-series of satellite sensor data. IEEE Trans. Geosci. Remote Sens. 2002;40(8):1824–32.
Fensholt R, Proud SR. Evaluation of Earth Observation based global long term vegetation trends — Comparing GIMMS and MODIS global NDVI time series. Remote Sens Environ. 2012;119:131–47. doi:10.1016/j.rse.2011.12.015.
Pan Z, Huang J, Zhou Q, Wang L, Cheng Y, Zhang H, et al. Mapping crop phenology using NDVI time-series derived from HJ-1 A/B data. Int J Appl Earth Obs Geoinf. 2015;34:188–97. doi:10.1016/j.jag.2014.08.011.
Künzer C, Stefan D, Wolfgang W. Remote Sensing Time Series-Revealing Land Surface Dynamics. Springer. 2015
Guyet T, Nicolas H. Long term analysis of time series of satellite images. Pattern Recogn Lett. 2016;70:17–23. doi:10.1016/j.patrec.2015.11.005.
Jakubauskas ME, Legates DR, Kastens JH. Crop identification using harmonic analysis of time-series AVHRR NDVI data. Comput Electron Agric. 2002;37:127–39.
Reed BC, Brown JF, VanderZee D, Loveland TR, Merchant JW, Ohlen DO. Measuring phenological variability from satellite imagery. J Veg Sci. 1994;5:703–14.
Sakamoto T, Yokozawa M, Toritani H, Shibayama M, Ishitsuka N, Ohno H. A crop phenology detection method using time-series MODIS data. Remote Sens Environ. 2005;96(3–4):366–74. doi:10.1016/j.rse.2005.03.008.
Verbeiren S, Eerens H, Piccard I, Bauwens I, Van Orshoven J. Sub-pixel classification of SPOT-VEGETATION time series for the assessment of regional crop areas in Belgium. Int J Appl Earth Obs Geoinf. 2008;10(4):486–97. doi:10.1016/j.jag.2006.12.003.
Zhang X, Friedla MA, Schaaf CB, Strahler AH, Hodges JCF, Gao F, et al. Monitoring vegetation phenology using MODIS. Remote Sens Environ. 2003;84:471–5.
Bian J, Li A, Wang Q, Huang C. Development of Dense Time Series 30-m Image Products from the Chinese HJ-1A/B Constellation: A Case Study in Zoige Plateau, China. Remote Sens. 2015;7(12):16647–71. doi:10.3390/rs71215846.
Gao F, Masek J, Schwaller M, Hall F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006;44:2207–18.
Schmidt M, Udelhoven T, Gill T, Röder A. Long term data fusion for a dense time series analysis with MODIS and Landsat imagery in an Australian Savanna. J. Appl. Remote. Sens. 2012;6(1):063512.
Wu M, Niu Z, Wang C, Wu C, Wang L. Use of MODIS and Landsat time series data to generate high-resolution temporal synthetic Landsat data using a spatial and temporal reflectance fusion model. J Appl Remote Sens. 2012;6(1):063507.
Wu M, Zhang X, Huang W, Niu Z, Wang C, Li W, et al. Reconstruction of Daily 30 m Data from HJ CCD, GF-1 WFV, Landsat, and MODIS Data for Crop Monitoring. Remote Sens. 2015;7(12):16293–314. doi:10.3390/rs71215826.
Gao F, Hilker T, Zhu X, Anderson M, Masek J, Wang P, et al. Fusing Landsat and MODIS Data for Vegetation Monitoring. IEEE Geoscience and Remote Sensing Magazine. 2015;3(3):47–60.
Kong F, Li X, Wang H, Xie D, Li X, Bai Y. Land Cover Classification Based on Fused Data from GF-1 and MODIS NDVI Time Series. Remote Sens. 2016;8(9):741. doi:10.3390/rs8090741.
Rembold F, Meroni M, Urbano F, Royer A, Atzberger C, Lemoine G, et al. Remote sensing time series analysis for crop monitoring with the SPIRITS software: new functionalities and use examples. Front Environ Sci 2015;3. doi:10.3389/fenvs.2015.00046.
Sandau R, Brieß K, D'Errico M. Small satellites for global coverage: Potential and limits. ISPRS J Photogramm Remote Sens. 2010;65(6):492–504. doi:10.1016/j.isprsjprs.2010.09.003.
Marshall W, Boshuizen C. Planet Labs' Remote Sensing Satellite System. Proceedings of the AIAA/USU Conference on Small Satellites. 2013.
Sun L, Gao F, Anderson M, Kustas W, Alsina M, Sanchez L, et al. Daily Mapping of 30 m LAI and NDVI for Grape Yield Prediction in California Vineyards. Remote Sens. 2017;9(4):317. doi:10.3390/rs9040317.
Eerens H, Haesen D, Rembold F, Urbano F, Tote C, Bydekerke L. Image time series processing for agriculture monitoring. Environ Model Softw. 2014;53:154–62. doi:10.1016/j.envsoft.2013.10.021.
Jönsson P, Eklundh L. TIMESAT—a program for analyzing time-series of satellite sensor data. Comput Geosci. 2004;30(8):833–45. doi:10.1016/j.cageo.2004.05.006.
Wang Q, Wu C, Li Q, Li J. Chinese HJ-1A/B satellites and data characteristics. Science China (Earth Sciences edition). 2011;53(51):51–7. doi:10.1007/s11430-010-4139-0.
Hird JN, McDermid GJ. Noise reduction of NDVI time series: An empirical comparison of selected techniques. Remote Sens Environ. 2009;113(1):248–58. doi:10.1016/j.rse.2008.09.003.
Sakamoto T, Wardlow BD, Gitelson AA, Verma SB, Suyker AE, Arkebauer TJ. A Two-Step Filtering approach for detecting maize and soybean phenology with time-series MODIS data. Remote Sens Environ. 2010;114(10):2146–59. doi:10.1016/j.rse.2010.04.019.
Baisch S, Bokelmann Gt HR. Spectral analysis with incomplete time series: an example from seismology. Comput Geosci. 1999;25:739-50.
Schulz M, Stattegger K. Spectrum: spectral analysis of unevenly spaced paleoclimatic time series. Comput Geosci. 1997;9(23):929–45.
Cong N, Piao S, Chen A, Wang X, Lin X, Chen S, et al. Spring vegetation green-up date in China inferred from SPOT NDVI data: A multiple model analysis. Agric For Meteorol. 2012;165:104–13. doi:10.1016/j.agrformet.2012.06.009.
Savitzky A, Golay MJE. Smoothing and differentiation of data by simplified least Squares procedures. Anal Chem. 1964;36(8):1627–39.
Small C. Spatiotemporal dimensionality and Time-Space characterization of multitemporal imagery. Remote Sens Environ. 2012;124:793–809. doi:10.1016/j.rse.2012.05.031.
Plaza A, Martín G, Plaza J, Zortea M, Sánchez S. Recent Developments in Endmember Extraction and Spectral Unmixing. Optical Remote Sensing. 2011:235–67. doi: 10.1007/978-3-642-14212-3_12.
Gruninger J, Ratkowski AJ, Hoke ML, Lewis PE. The sequential maximum angle convex cone (SMACC) endmember model. Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery (Proceedings of SPIE). 2004;5425:1–14. doi:10.1117/12.543794.
Xu Y, Shen Y. Reconstruction of the land surface temperature time series using harmonic analysis. Comput Geosci. 2013;61:126–32. doi:10.1016/j.cageo.2013.08.009.
Bradley BA, Jacob RW, Hermance JF, Mustard JF. A curve fitting procedure to derive inter-annual phenologies from time series of noisy satellite NDVI data. Remote Sens Environ. 2007;106(2):137–45. doi:10.1016/j.rse.2006.08.002.
Julien Y, Sobrino JA. Comparison of cloud-reconstruction methods for time series of composite NDVI data. Remote Sens Environ. 2010;114(3):618–25. doi:10.1016/j.rse.2009.11.001.
Our research was supported by the National Science Foundation of China (Grant NO. U1301253), and the International Postdoctoral Exchange Fellowship Program 2017 (Grant NO. 20170029).
For readers interested in using these methods to process time-series remote sensing data, the source code of this project, written in IDL, can be viewed and checked out from a GitHub repository.Footnote 1 The source code is also available within each release. Questions are welcome via e-mail: [email protected] (panzhuokun).
Institute of Geoinformation Engineering, South China Agricultural University, Guangzhou, China
Zhuokun Pan
& Yueming Hu
Key Laboratory of Construction Land Transformation, Ministry of Land and Resources, Guangzhou, China
College of Marine Science, Shanghai Ocean University, Shanghai, China
Bin Cao
ZP originally designed and conducted the research and wrote this manuscript; YH is Pan's supervisor and provided additional funding for his research; BC tested the program and revised the manuscript before submission. All authors read and approved the final manuscript.
Correspondence to Zhuokun Pan.
Pan, Z., Hu, Y. & Cao, B. Construction of smooth daily remote sensing time series data: a higher spatiotemporal resolution perspective. Open geospatial data, softw. stand. 2, 25 (2017) doi:10.1186/s40965-017-0038-z
Keywords: High spatiotemporal; HJ-1 A/B; IDL program
November 2019, 18(6): 3317-3336. doi: 10.3934/cpaa.2019149
The regularity of a degenerate Goursat problem for the 2-D isothermal Euler equations
Yanbo Hu 1,, and Tong Li 2,
Department of Mathematics, Hangzhou Normal University, Hangzhou, 311121, China
Department of Mathematics, University of Iowa, Iowa City, IA 52242, United States
Received August 2018 Revised February 2019 Published May 2019
Fund Project: The first author was supported by NSF of Zhejiang Province LY17A010019, NSFC 11301128, 11571088 and China Scholarship Council 201708330155
We study the regularity of the solution and of the sonic boundary to a degenerate Goursat problem originating from the two-dimensional Riemann problem of the compressible isothermal Euler equations. By using the ideas of characteristic decomposition and the bootstrap method, we show that the solution is uniformly ${C^{1,\frac{1}{6}}}$ up to the degenerate sonic boundary and that the sonic curve is ${C^{1,\frac{1}{6}}}$.
Keywords: Compressible Euler equations, semi-hyperbolic patch, degenerate Goursat problem, sonic curve, characteristic decomposition.
Mathematics Subject Classification: 35L65, 35L80, 35R35.
Citation: Yanbo Hu, Tong Li. The regularity of a degenerate Goursat problem for the 2-D isothermal Euler equations. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3317-3336. doi: 10.3934/cpaa.2019149
Figure 1. The semi-hyperbolic patch
Figure 2. Case 2
Figure 3. The region of $ \Omega_\nu(\bar{z}) $
Yuxi Zheng. Absorption of characteristics by sonic curve of the two-dimensional Euler equations. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 605-616. doi: 10.3934/dcds.2009.23.605
Jianjun Chen, Geng Lai. Semi-hyperbolic patches of solutions to the two-dimensional compressible magnetohydrodynamic equations. Communications on Pure & Applied Analysis, 2019, 18 (2) : 943-958. doi: 10.3934/cpaa.2019046
Kyungwoo Song, Yuxi Zheng. Semi-hyperbolic patches of solutions of the pressure gradient system. Discrete & Continuous Dynamical Systems - A, 2009, 24 (4) : 1365-1380. doi: 10.3934/dcds.2009.24.1365
Hiroki Sumi, Mariusz Urbański. Measures and dimensions of Julia sets of semi-hyperbolic rational semigroups. Discrete & Continuous Dynamical Systems - A, 2011, 30 (1) : 313-363. doi: 10.3934/dcds.2011.30.313
Shuxing Chen, Gui-Qiang Chen, Zejun Wang, Dehua Wang. A multidimensional piston problem for the Euler equations for compressible flow. Discrete & Continuous Dynamical Systems - A, 2005, 13 (2) : 361-383. doi: 10.3934/dcds.2005.13.361
Magdalena Caubergh, Freddy Dumortier, Stijn Luca. Cyclicity of unbounded semi-hyperbolic 2-saddle cycles in polynomial Lienard systems. Discrete & Continuous Dynamical Systems - A, 2010, 27 (3) : 963-980. doi: 10.3934/dcds.2010.27.963
Zhi-Qiang Shao. Global existence of classical solutions of Goursat problem for quasilinear hyperbolic systems of diagonal form with large BV data. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2739-2752. doi: 10.3934/cpaa.2013.12.2739
Renjun Duan, Shuangqian Liu. Cauchy problem on the Vlasov-Fokker-Planck equation coupled with the compressible Euler equations through the friction force. Kinetic & Related Models, 2013, 6 (4) : 687-700. doi: 10.3934/krm.2013.6.687
Chengchun Hao. Remarks on the free boundary problem of compressible Euler equations in physical vacuum with general initial densities. Discrete & Continuous Dynamical Systems - B, 2015, 20 (9) : 2885-2931. doi: 10.3934/dcdsb.2015.20.2885
Tung Chang, Gui-Qiang Chen, Shuli Yang. On the 2-D Riemann problem for the compressible Euler equations I. Interaction of shocks and rarefaction waves. Discrete & Continuous Dynamical Systems - A, 1995, 1 (4) : 555-584. doi: 10.3934/dcds.1995.1.555
Tung Chang, Gui-Qiang Chen, Shuli Yang. On the 2-D Riemann problem for the compressible Euler equations II. Interaction of contact discontinuities. Discrete & Continuous Dynamical Systems - A, 2000, 6 (2) : 419-430. doi: 10.3934/dcds.2000.6.419
Ping Chen, Ting Zhang. A vacuum problem for multidimensional compressible Navier-Stokes equations with degenerate viscosity coefficients. Communications on Pure & Applied Analysis, 2008, 7 (4) : 987-1016. doi: 10.3934/cpaa.2008.7.987
Young-Pil Choi. Compressible Euler equations interacting with incompressible flow. Kinetic & Related Models, 2015, 8 (2) : 335-358. doi: 10.3934/krm.2015.8.335
Qing Chen, Zhong Tan. Time decay of solutions to the compressible Euler equations with damping. Kinetic & Related Models, 2014, 7 (4) : 605-619. doi: 10.3934/krm.2014.7.605
Jianwei Yang, Ruxu Lian, Shu Wang. Incompressible type euler as scaling limit of compressible Euler-Maxwell equations. Communications on Pure & Applied Analysis, 2013, 12 (1) : 503-518. doi: 10.3934/cpaa.2013.12.503
Shu Wang, Chundi Liu. Boundary Layer Problem and Quasineutral Limit of Compressible Euler-Poisson System. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2177-2199. doi: 10.3934/cpaa.2017108
Young-Sam Kwon. Strong traces for degenerate parabolic-hyperbolic equations. Discrete & Continuous Dynamical Systems - A, 2009, 25 (4) : 1275-1286. doi: 10.3934/dcds.2009.25.1275
Joachim Escher, Rossen Ivanov, Boris Kolev. Euler equations on a semi-direct product of the diffeomorphisms group by itself. Journal of Geometric Mechanics, 2011, 3 (3) : 313-322. doi: 10.3934/jgm.2011.3.313
Emanuel-Ciprian Cismas. Euler-Poincaré-Arnold equations on semi-direct products II. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11) : 5993-6022. doi: 10.3934/dcds.2016063
Hong Cai, Zhong Tan. Stability of stationary solutions to the compressible bipolar Euler-Poisson equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 4677-4696. doi: 10.3934/dcds.2017201
Yanbo Hu Tong Li | CommonCrawl |
Scalars, Vectors, Matrices and Tensors - Linear Algebra for Deep Learning (Part 1)
Back in March we ran a content survey and found that many of you were interested in a refresher course for the key mathematical topics needed to understand deep learning and quant finance in general.
Since deep learning is going to be a big part of this year's content we thought it would be worthwhile to write some beginner tutorials on the key mathematical topics—linear algebra, calculus and probability—that are necessary to really understand deep learning for quant trading.
This article is the first in the series of posts on the topic of Linear Algebra for Deep Learning. It is intended to bring you up to speed on the basic ideas and notation that will be found in the more advanced deep learning textbooks and research papers. Reading these papers is absolutely crucial for finding the best quantitative trading methods, and as such it helps to speak the language!
Linear algebra is a fundamental topic in the subject of mathematics and is extremely pervasive in the physical sciences. It also forms the backbone of many machine learning algorithms. Hence it is crucial for the deep learning practitioner to understand the core ideas.
Linear algebra is a branch of continuous, rather than discrete mathematics. The mathematician, physicist, engineer and quant will likely be familiar with continuous mathematics through the study of differential equations, which are used to model many physical and financial phenomena.
The computer scientist, software developer or retail discretionary trader however may only have gained exposure to mathematics through subjects such as graph theory or combinatorics—topics found within discrete mathematics. Hence the set and function notation presented here may be initially unfamiliar.
For this reason the discussion presented in this article series will omit the usual "theorem and proof" approach of an undergraduate mathematics textbook. Instead the focus will be on selected topics that are relevant to deep learning practitioners from diverse backgrounds.
Please note that the outline of linear algebra presented in this article series closely follows the notation and excellent treatments of Goodfellow et al (2016)[3], Blyth and Robertson (2002)[1] and Strang (2016)[2].
Linear algebra, probability and calculus are the 'languages' in which machine learning is written. Learning these topics will provide a deeper understanding of the underlying algorithmic mechanics and allow development of new algorithms, which can ultimately be deployed as more sophisticated quantitative trading strategies.
Many supervised machine learning and deep learning algorithms largely entail optimising a loss function by adjusting model parameters. To carry this out requires some notion of how the loss function changes as the parameters of the model are varied.
This immediately motivates calculus—the elementary topic in mathematics which describes changes of quantities with respect to another. In particular it requires the concept of a partial derivative, which specifies how the loss function is altered through individual changes in each parameter.
These partial derivatives are often grouped together—in matrices—to allow more straightforward calculation. Even the most elementary machine learning models such as linear regression are optimised with these linear algebra techniques.
A key topic in linear algebra is that of vector and matrix notation. Being able to 'read the language' of linear algebra will open up the ability to understand textbooks, web posts and research papers that contain more complex model descriptions. This will not only allow reproduction and verification of existing models, but will allow extensions and new developments that can subsequently be deployed in trading strategies.
Linear algebra provides the first steps into vectorisation, presenting a deeper way of thinking about parallelisation of certain operations. Algorithms written in standard 'for-loop' notation can be reformulated as matrix equations providing significant gains in computational efficiency.
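As a minimal sketch of this reformulation (the values are illustrative), the same dot product can be written as an explicit Python loop or as a single vectorised NumPy call:

```python
import numpy as np

x = np.arange(10_000, dtype=np.float64)
y = np.arange(10_000, dtype=np.float64)

# 'For-loop' formulation: one Python-level operation per element
loop_result = 0.0
for i in range(len(x)):
    loop_result += x[i] * y[i]

# Vectorised formulation: one call, executed in optimised compiled code
vec_result = x @ y

assert np.isclose(loop_result, vec_result)
```

The vectorised form is both shorter and dramatically faster, because the element-wise work happens inside compiled library code rather than the Python interpreter.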
Such methods are used in the major Python libraries such as NumPy, SciPy, Scikit-Learn, Pandas and Tensorflow. GPUs have been designed to carry out optimised linear algebra operations. The explosive growth in deep learning can partially be attributed to the highly parallelised nature of the underlying algorithms on commodity GPU hardware.
Linear algebra is a continuous mathematics subject but ultimately the entities discussed below are implemented in a discrete computational environment. These discrete representations of linear algebra entities can lead to issues of overflow and underflow, which represent the limits of effectively representing extremely large and small numbers computationally.
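These limits are easy to demonstrate with 64-bit floats (a minimal sketch; the particular constants are chosen only to sit near the representable range):

```python
import numpy as np

# Overflow: a value beyond the float64 range becomes inf.
# Underflow: a value below the smallest subnormal becomes exactly 0.0.
with np.errstate(over="ignore", under="ignore"):
    large = np.float64(1e308) * 10      # overflows to inf
    small = np.float64(1e-320) / 1e10   # underflows to 0.0

print(large)  # inf
print(small)  # 0.0
```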
One mechanism for mitigating the effects of limited numerical representation is to make use of matrix factorisation techniques. Such techniques allow certain matrices to be represented in terms of simpler, structured matrices that have useful computational properties.
Matrix decomposition techniques include Lower Upper (LU) decomposition, QR decomposition and Singular Value Decomposition (SVD). They are an intrinsic component of certain machine learning algorithms including Linear Least Squares and Principal Components Analysis (PCA). Matrix decomposition will be discussed at length later in this series.
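As a brief preview (a sketch with an arbitrary small matrix), NumPy can factorise a matrix with the SVD and the factors reproduce it:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Factorise A = U @ diag(s) @ Vt, then rebuild it from the factors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_rebuilt = U @ np.diag(s) @ Vt

assert np.allclose(A, A_rebuilt)
```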
It cannot be overemphasised how fundamental linear algebra is to deep learning. For those that are aiming to deploy the most sophisticated quant models based on deep learning techniques—or are seeking employment at firms that are—it will be necessary to learn linear algebra extremely well.
The material in this article series will cover the bare minimum, but to understand the research frontier it will be necessary to go much further than this. Please see the References at the end of the article for a brief list on where to continue studying linear algebra.
Vectors and Matrices
The two primary mathematical entities that are of interest in linear algebra are the vector and the matrix. They are examples of a more general entity known as a tensor. Tensors possess an order (or rank), which determines the number of dimensions in an array required to represent it.
Scalars
Scalars are single numbers and are an example of a 0th-order tensor. In mathematics it is necessary to describe the set of values to which a scalar belongs. The notation $x \in \mathbb{R}$ states that the (lowercase) scalar value $x$ is an element of (or member of) the set of real-valued numbers, $\mathbb{R}$.
There are various sets of numbers of interest within machine learning. $\mathbb{N}$ represents the set of positive integers ($1, 2, 3,\ldots$). $\mathbb{Z}$ represents the integers, which include positive, negative and zero values. $\mathbb{Q}$ represents the set of rational numbers that may be expressed as a fraction of two integers.
Vectors

Vectors are ordered arrays of single numbers and are an example of a 1st-order tensor. Vectors are members of objects known as vector spaces. A vector space can be thought of as the entire collection of all possible vectors of a particular length (or dimension). The three-dimensional real-valued vector space, denoted by $\mathbb{R}^3$, is often used to represent our real-world notion of three-dimensional space mathematically.
More formally a vector space is an $n$-dimensional Cartesian product of a set with itself, along with proper definitions on how to add vectors and multiply them with scalar values. If all of the scalars in a vector are real-valued then the notation $\boldsymbol{x} \in \mathbb{R}^n$ states that the (boldface lowercase) vector value $\boldsymbol{x}$ is a member of the $n$-dimensional vector space of real numbers, $\mathbb{R}^n$.
Sometimes it is necessary to identify the components of a vector explicitly. The $i$th scalar element of a vector is written as $x_i$. Notice that this is non-bold lowercase since the element is a scalar. An $n$-dimensional vector itself can be explicitly written using the following notation:
\begin{equation} \boldsymbol{x}=\begin{bmatrix} \kern4pt x_1 \kern4pt \\ \kern4pt x_2 \kern4pt \\ \kern4pt \vdots \kern4pt \\ \kern4pt x_n \kern4pt \end{bmatrix} \end{equation}
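The notation above maps directly onto a concrete NumPy array (illustrative values):

```python
import numpy as np

x = np.array([1.5, -2.0, 3.7])   # a vector in R^3

print(x.shape)   # (3,)
# NumPy indexing is 0-based, so the mathematical element x_1 is x[0]:
print(x[0])      # 1.5
```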
Given that scalars exist to represent values, why are vectors necessary? One of the primary use cases for vectors is to represent physical quantities that have both a magnitude and a direction. Scalars are only capable of representing magnitudes.
For instance scalars and vectors encode the difference between the speed of a car and its velocity. The velocity contains not only its speed but also its direction of travel. It is not difficult to imagine many more physical quantities that possess similar characteristics such as gravitational and electromagnetic forces or wind velocity.
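The speed/velocity distinction can be made concrete (illustrative component values in m/s): the velocity is a vector, and the scalar speed is its Euclidean norm.

```python
import numpy as np

velocity = np.array([3.0, 4.0, 0.0])   # velocity vector in R^3

# The magnitude (Euclidean norm) of the velocity is the scalar speed:
speed = np.linalg.norm(velocity)
print(speed)   # 5.0
```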
In machine learning vectors often represent feature vectors, with their individual components specifying how important a particular feature is. Such features could include relative importance of words in a text document, the intensity of a set of pixels in a two-dimensional image or historical price values for a cross-section of financial instruments.
Matrices

Matrices are rectangular arrays consisting of numbers and are an example of 2nd-order tensors. If $m$ and $n$ are positive integers, that is, $m, n \in \mathbb{N}$, then the $m \times n$ matrix contains $mn$ numbers, with $m$ rows and $n$ columns.
If all of the scalars in a matrix are real-valued then a matrix is denoted with uppercase boldface letters, such as $\boldsymbol{A} \in \mathbb{R}^{m \times n}$. That is, the matrix lives in an $m \times n$-dimensional real-valued vector space. Hence matrices are really vectors that are just written in a two-dimensional table-like manner.
Its components are now identified by two indices $i$ and $j$. $i$ represents the index to the matrix row, while $j$ represents the index to the matrix column. Each component of $\boldsymbol{A}$ is identified by $a_{ij}$.
The full $m \times n$ matrix can be written as:
\begin{equation} \boldsymbol{A}=\begin{bmatrix} \kern4pt a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \kern4pt \\ \kern4pt a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \kern4pt \\ \kern4pt a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \kern4pt \\ \kern4pt \vdots & \vdots & \vdots & \ddots & \vdots \kern4pt \\ \kern4pt a_{m1} & a_{m2} & a_{m3} & \ldots & a_{mn} \kern4pt \\ \end{bmatrix} \end{equation}
It is often useful to abbreviate the full matrix component display into the following expression:
\begin{equation} \boldsymbol{A} = [a_{ij}]_{m \times n} \end{equation}
where $a_{ij}$ is referred to as the $(i,j)$-element of the matrix $\boldsymbol{A}$. The subscript of $m \times n$ can be dropped if the dimension of the matrix is clear from the context.
Note that a column vector is a size $m \times 1$ matrix, since it has $m$ rows and 1 column. Unless otherwise specified all vectors will be considered to be column vectors.
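In NumPy this notation looks as follows (a small illustrative matrix; note the 0-based indexing):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # A in R^{2x3}

m, n = A.shape                    # m = 2 rows, n = 3 columns
a_12 = A[0, 1]                    # the mathematical a_{12}

# A column vector is an m x 1 matrix:
col = A[:, 0].reshape(m, 1)
print(col.shape)                  # (2, 1)
```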
Matrices represent a type of function known as a linear map. Based on rules that will be outlined in subsequent articles, it is possible to define multiplication operations between matrices or between matrices and vectors. Such operations are immensely important across the physical sciences, quantitative finance, computer science and machine learning.
Matrices can encode geometric operations such as rotation, reflection and transformation. Thus if a collection of vectors represents the vertices of a three-dimensional geometric model in Computer Aided Design software then multiplying these vectors individually by a pre-defined rotation matrix will output new vectors that represent the locations of the rotated vertices. This is the basis of modern 3D computer graphics.
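A minimal sketch of such a geometric operation: rotating a 2D vertex 90 degrees counter-clockwise about the origin by multiplying it with a rotation matrix.

```python
import numpy as np

theta = np.pi / 2
# Standard 2D rotation matrix for angle theta:
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vertex = np.array([1.0, 0.0])
rotated = R @ vertex

print(rotated)   # approximately [0., 1.]
```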
In deep learning neural network weights are stored as matrices, while feature inputs are stored as vectors. Formulating the problem in terms of linear algebra allows compact handling of these computations. By casting the problem in terms of tensors and utilising the machinery of linear algebra, rapid training times on modern GPU hardware can be obtained.
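A hypothetical dense layer illustrates this pairing (the sizes and values are made up for the example): the weights form a matrix $\boldsymbol{W}$, the input features a vector $\boldsymbol{x}$, and the layer output is one matrix-vector product plus a bias vector.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((4, 3))   # weight matrix: 4 outputs, 3 inputs
b = np.zeros(4)                   # bias vector
x = np.array([0.2, -1.0, 0.5])    # feature vector

z = W @ x + b                     # the whole layer is one product
print(z.shape)                    # (4,)
```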
Tensors

The more general entity of a tensor encapsulates the scalar, the vector and the matrix. It is sometimes necessary—both in the physical sciences and machine learning—to make use of tensors with order that exceeds two.
In theoretical physics, and general relativity in particular, the Riemann curvature tensor is a 4th-order tensor that describes the local curvature of spacetime. In machine learning, and deep learning in particular, a 3rd-order tensor can be used to describe the intensity values of multiple channels (red, green and blue) from a two-dimensional image.
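A toy sketch of the image case: a 2x2 RGB image stored as a 3rd-order tensor with shape (height, width, channels).

```python
import numpy as np

image = np.zeros((2, 2, 3), dtype=np.uint8)
image[0, 0] = [255, 0, 0]   # set the top-left pixel to pure red

print(image.ndim)           # 3 -- an order-3 tensor
print(image[0, 0, 0])       # 255 -- each element needs three indices
```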
Tensors will be identified in this series of posts via the boldface sans-serif notation, $\textsf{A}$. For a 3rd-order tensor elements will be given by $a_{ijk}$, whereas for a 4th-order tensor elements will be given by $a_{ijkl}$.
In the next article the basic operations of matrix-vector and matrix-matrix multiplication will be outlined. This topic is collectively known as matrix algebra.
Matrix Algebra - Linear Algebra for Deep Learning (Part 2)
[1] Blyth, T.S. and Robertson, E.F. (2002) Basic Linear Algebra, 2nd Ed., Springer
[2] Strang, G. (2016) Introduction to Linear Algebra, 5th Ed., Wellesley-Cambridge Press
[3] Goodfellow, I.J., Bengio, Y., Courville, A. (2016) Deep Learning, MIT Press